
When creating an interface from existing code, there seems to be repetitive work to be done for each function that needs to be exposed. Is there anything wrong with, instead of coding:

(defn foo [arg1 arg2]
  (core/foo arg1 arg2))

just writing:

(def foo core/foo)


You lose the ability to redefine the function on the fly in the REPL while you're developing.


I think you may lose metadata too. I'm not at my computer to check.


I think it's worth the effort and habit of defining proper functions with docstrings etc., since you can see the arguments in the interface, so that file becomes much more readable and useful.


BTW this is like the number one FAQ from everyone who is just getting started with Polylith 😁


I'd rather do it the 'def way' and then run something that automatically changes the `def`s to `defn`s with args, than write them all by hand. Doesn't seem like it would be too difficult a tool to make.


Yeah, I just take the opportunity to write docstrings so it doesn't feel like a waste. You can use `(def foo #'core/foo)` so you can still easily redefine in the REPL. `defalias` works here to preserve the metadata, and you can alter `defalias` to use `#'` to get both meta and reloading:

(defmacro defalias-to-var [new-name old-name]
  `(do
     (def ~new-name #'~old-name)
     ;; update-alias-meta is assumed to merge the old var's
     ;; metadata onto the alias var.
     (update-alias-meta '~new-name (meta (var ~old-name)))))
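A minimal sketch (plain Clojure, hypothetical names) of why the `#'` matters: a value alias captures the function at `def` time, while a var alias goes through the var on every call, so REPL redefinitions show through.

```clojure
(defn f [x] (inc x))

(def g f)    ;; value alias: captures the function value f has right now
(def h #'f)  ;; var alias: every call dereferences the var #'f

;; Simulate redefining the impl function in the REPL.
(defn f [x] (* 10 x))

(g 3) ;; => 4  (still the old definition)
(h 3) ;; => 30 (sees the new definition)
```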


It might need some attention on the `:name` and `:ns` metadata, though. I haven't really looked at what those need to be, but you could probably just `dissoc` `:name` and `:ns` from the old meta before merging.
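A sketch of that meta-merging step (the helper name is hypothetical): drop `:name` and `:ns` from the impl var's metadata before merging it onto the alias, so the alias keeps its own identity.

```clojure
(defn copy-alias-meta!
  "Merge old-var's metadata onto new-var, minus :name and :ns.
  (Hypothetical helper for illustration.)"
  [new-var old-var]
  (alter-meta! new-var merge (dissoc (meta old-var) :name :ns)))

(defn impl-fn "Docstring from the impl." [x] x)
(def alias-fn impl-fn)
(copy-alias-meta! #'alias-fn #'impl-fn)

(:doc (meta #'alias-fn))  ;; => "Docstring from the impl."
(:name (meta #'alias-fn)) ;; => alias-fn (not clobbered)
```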


I was thinking of a tool that just alters the source code. So it looks in core to find the defn with the name foo, reads the args and the docstring, and overwrites the interface def with a defn named foo that has the same docstring as core/foo and calls core/foo with all the args.
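A rough sketch of such a generator (function name and output format are just assumptions): read `:arglists` and `:doc` off the impl var and emit the delegating `defn` as source text, one body per arity.

```clojure
(defn interface-defn-source
  "Return the source text of a defn that delegates to impl-var,
  copying its docstring and arglists. Sketch only."
  [impl-var]
  (let [{:keys [name doc arglists ns]} (meta impl-var)
        qualified (symbol (str ns) (str name))]
    (pr-str
     (concat ['defn name]
             (when doc [doc])
             ;; one ([args...] (ns/name args...)) body per arity
             (for [args arglists]
               (list args (list* qualified args)))))))

(defn foo "Adds things." ([a] a) ([a b] (+ a b)))
(interface-defn-source #'foo)
;; e.g. "(defn foo \"Adds things.\" ([a] (user/foo a)) ([a b] (user/foo a b)))"
```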


As we've migrated a lot of code to Polylith (in our 119K line monorepo), we've found there often isn't quite a 1:1 mapping between interface and impl, and it can be nice to have different docstrings: one for "users" (callers) in the interface, one for maintainers (future you) in the impl.

👍 2

While I'm developing, I often just write code in the interface and then refactor it to one or more functions in the impl.


There are several benefits of using interfaces the way we do, as suggested in the documentation:
• The interface can expose the name of the entity, e.g. `sell [car]`, while the implementing function can do the destructuring, e.g. `sell [{:keys [model type color]}]`, which sometimes improves readability.
• If we have a multi-arity function in the interface, a simplification can sometimes be to have a single-arity function in the implementing namespace that allows some parameters to be passed in as `nil`.
• If using variadic arguments (`&`) in the interface, a simplification is to pass in what comes after `&` as a `vector` to the implementing function.
The good thing with the current solution is that you can easily distinguish between `def`, `defn` and `defmacro` statements by just reading the interface. This information is also stored in the ws data structure, which can be used by external tooling. Our experience is that this is perceived as a problem in the beginning, but the benefits outweigh the cost over time as you get more used to the interfaces. We also try to add as little magic as possible to the tool.
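A sketch of the first bullet, with a hypothetical `car` component: the interface exposes the entity name, the implementing namespace does the destructuring. (Both namespaces live in one snippet here, so the interface calls the impl fully qualified; in a real component you'd use `(:require [myapp.car.core :as core])`.)

```clojure
;; Implementing namespace: does the destructuring.
(ns myapp.car.core)

(defn sell [{:keys [model type color]}]
  (str "sold: " color " " model))

;; Interface namespace: exposes the entity name `car` and delegates.
(ns myapp.car.interface)

(defn sell
  "Sell a `car` (a map with :model, :type and :color)."
  [car]
  (myapp.car.core/sell car))
```

Callers see `sell [car]` in the interface, while the shape of the map stays an implementation detail: `(myapp.car.interface/sell {:model "911" :type :coupe :color "red"})`.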

💯 6
☝️ 1

I have refactored some code to Polylith so appreciate that interface and impl functions will often differ. But OTOH copying and pasting for functions that don't differ is still a pain, and an 'interface def -> defn' tool could by default do the three simplifications you mention @U1G0HH87L. I wasn't envisioning the tool being an official Polylith thing, but just an external helper that would ease the time-consuming pain of refactoring to Polylith. And perhaps the tool shouldn't do anything with docstrings - 'copying them' sounded wrong even as I wrote it!


I think you're looking at the design process from the wrong end tho'... you're acting as if you write the implementation first and then "copy" parts of it to the interface. I think there's much more value in designing and writing the interface first, and perhaps even writing simple implementations in the interface file and only then building out the implementation (which, as I noted above, can start in the interface file and be refactored to the implementation). In reality, you could have all the code in the interface file and have no separate implementation at all. You'd be losing the clarity that the refactoring/separation brings.

❤️ 3

Really good discussion here. Maybe this thread could be moved to an FAQ in the documentation. What do you think @U1G0HH87L ?


Sounds like a good idea to distill this and add to the documentation!


@U04V70XH6 I'm talking about refactoring existing code, lots of existing code, to Polylith. And the way I was doing it, I lumped the existing code into a component (usually not just a core, but several source files) and then had the job of writing the interface. It seemed quick and easy enough to write `def`s rather than `defn`s. Perhaps my way of doing the porting wasn't ideal? I was indeed writing the implementation first, because the implementation is already written when you are refactoring existing code. Nevertheless, what you are saying still applies, but only some of the time, and not so much for the parts of the legacy code that are already pretty much in components - where I'm just writing the interface part on top of them.


It’s cool that you are migrating your codebase to Polylith @U0D5RN0S1! It will be interesting to follow.

😳 1

@U0D5RN0S1 We are also migrating existing code to Polylith -- 119K lines of it. We have about 25.5K migrated so far. That's causing a lot of renaming of namespaces (because we weren't consistent about a "top-ns" pattern previously) so we're taking advantage of that to also do other reorganization as we do the migration, hence the focus on carefully structured interfaces. I've talked about this process on my blog -- especially how Polylith is helping us focus on naming, modularization, and dependencies.


We’re actively developing a new product in Polylith, and will be integrating our earlier product into it as well. But I admit—this is our biggest complaint as well. I’d like to see the introduction of something that’s perhaps not called a “component,” but rather a “helper library”, or something. There’s a class of functions I’d like to implement as a component, but see no value in having hot-swappable interfaces. Like “collection” or “string” libraries we’ve built—stuff that has no dependencies, but also feels totally pointless to be creating interface “copies” of. (With docstrings and everything.)


(But to be clear—the benefits of Polylith are huge; we’re not getting hung up on this. It just seems like it’s not quite there with respect to these helper functions.)


Last comment: as I understand it, components are meant to be short and sweet. Ideally their API will reflect that, and thus having a `.core` and `.interface` mirror each other is no big deal. (Totally fair.) But yeah, I mentioned above namespaces that are more like “grab bags” of helper functions. As for us, some of those namespaces are long, with dozens of public-facing functions. Perhaps these are not exactly components?


Perhaps @U04V70XH6’s comment above is all I need?


> In reality, you could have all the code in the interface file and have no separate implementation at all. You’d be losing the clarity that the refactoring/separation brings.


@U0HJA5ZQT before Polylith we had a subproject called lowlevel which had a number of such helper namespaces. When we looked at it through the lens of small, focused components, we realized it should be six or seven separate components, each with its own well-designed interface.


So if it is a utils ns where all functions are public and all used, why not just put it in the interface? That makes sense to me. You're not losing anything b/c as soon as you 'see the need', then you can do it properly.


Forgive me for going on a tangent a bit, but a mistake I made, and won't make next time when I start the refactor (from scratch) again, was not following the advice in the documentation that says to start off by putting all your code into one component (presumably the implementation). Then your first other component might be an easy one with no dependencies (say utils). And go from there...


Put all your app code in a base. Create a project describing how to build it. Done. Well, aside from the ns renaming it may take to get the entry point of the base to follow the top-ns.base-name convention.

👍 2

We just moved a 24K line app into Polylith by doing that. We haven't refactored it yet. Just followed the docs 🙂


I have a question around interfaces with poly. Hopefully I can articulate this well. Say I have two components "Foo" and "Bar", each implementing the same interface, which I also believe means the namespaces have to match. Now say I have another component "X" that requires one of those components. Is the idea that I could swap the implementation out by just changing the dep from Foo to Bar in the project, to get a different implementation?


Yes, that's correct.
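In a deps.edn-based workspace the swap could look roughly like this (project and component names hypothetical); the project's `deps.edn` picks which component backs the shared interface:

```clojure
;; projects/my-service/deps.edn (sketch)
{:deps {;; component "x", which calls the interface:
        com.mycompany/x   {:local/root "../../components/x"}
        ;; pick the implementation of that interface here:
        com.mycompany/foo {:local/root "../../components/foo"}
        ;; ...or swap in "bar" instead, which fulfils the same interface:
        ;; com.mycompany/bar {:local/root "../../components/bar"}
        }}
```

Since "X" only depends on the interface namespace, no code in "X" changes when you switch the line.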


Awesome, thanks


And you can use profiles to determine which component you get (which implementation) when working with the development project.
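A sketch of what those profiles could look like in the workspace-level `deps.edn` (component names hypothetical): profiles are aliases prefixed with `+`, each bringing one implementation's paths into the development project.

```clojure
;; workspace deps.edn (sketch)
{:aliases
 {;; activated with `poly`/REPL when you want the default impl:
  :+default {:extra-paths ["components/foo/src"]}
  ;; ...or the alternative impl of the same interface:
  :+remote  {:extra-paths ["components/bar/src"]}}}
```

Only one profile is active at a time, so only one namespace with that interface path is on the development classpath.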


ok, i was wondering about that. If multiple components have the same namespace structure for implementing an interface it seems like the REPL could get confused


Profiles deciding the impl is hot - great for e.g. using a stub implementation in staging instead of AWS SES