
The polylith tool has this note:

Note: You may wonder why we don't follow the same pattern in the development project as we do for all other projects, by treating the bricks as dependencies. The reason is that some tooling doesn't support it correctly at the moment, so we decided to wait until it does.
Does anyone have any specific examples of tooling that doesn't work with this approach? πŸ™‚


Ah, thanks. I missed this as I was scrolling. 🙂


On a side note, I'm curious to see you here @seancorfield, given that one of the alternatives to Polylith we're considering is based on your excellent blog post. As someone who's been using a similar approach in production, I'm curious whether you have thoughts on how these compare? Your approach seems more lightweight. What parts of Polylith do you see as the big wins over your approach?


Read my more recent blog posts. I talk about us migrating to Polylith and what we see the advantages as being.


Ah yes, I can see those now. Thanks. That'll make some really interesting reading.


It's nearly 1 am here but I'll be happy to answer questions in more depth during my day 🙂


Thanks @seancorfield, in that case I won't keep you! πŸ™‚


@UDC7GA4QG I'm back at my keyboard for about 15 mins if you have Qs. I'll be back "full-time" in a couple of hours I expect.


Thanks for the offer of your time @seancorfield. That's really generous. I've had a read of the later articles you've written -- really interesting stuff. After a bit more reading and thinking today, I've concluded (perhaps incorrectly?) that the real power of Polylith comes from the interfaces being polymorphic/swappable, rather than from encapsulation per se. I guess it's the polymorphic behaviour that allows decoupling development from deployment. But it still seems to me that Polylith adds a decent amount of additional boilerplate compared to plain old projects. I guess the interface namespace is the magic that makes that happen.


Polylith is a lot of things, and they all contribute to the overall benefits -- even when some initially seem like boilerplate (I railed against that myself when I first started looking at Polylith -- see some of my posts about it as well!).

πŸ’― 3

I don't know how much folks really use the swappable nature of components that have the same interface -- we haven't needed it yet -- but just the formalism of having those interfaces and "forcing" code to only interact through them really helps with thinking about component design in terms of naming and coupling. That has been one of the biggest benefits for us so far.

πŸ’― 3

The promise of incremental testing is also very encouraging -- but we can't really benefit from that until everything is converted to bases and components.


The dependency checking the tool does is very helpful, as well as the library usage report. The tooling is very, very useful overall -- but just another piece of the overall Polylith concept.


It's going to take us a long time to reorganize our entire monorepo -- there's a lot of namespace renaming in our future since we haven't been very consistent about that so far! 😞 -- but even the amount of separation of existing code/artifacts into bases (for our CLI-based processes) and projects (for our deployable artifacts) has cleaned up our dev/test/build tooling and scripts.


Thanks for clarifying @seancorfield, and the blog posts are a nice read. I found what you said about not using swappable components quite surprising. From my initial explorations, I got the impression that swappability was fairly core to the idea. It seemed to me that the polymorphic interfaces are what allows the development environment and the production environment to have different architectures (e.g. RPC in production and direct calls in development). If that's not a big value proposition, it makes me question why someone would choose Polylith's proxy-function approach over alternatives like putting the public interface functions directly into interfaces.clj (without proxy functions).


I know we're in awkward timezones. Don't feel rushed to get back to me. πŸ™‚


FWIW, you can just put everything into the interface files. That's up to you in terms of implementation details.
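For concreteness, here's a minimal sketch of the two styles being discussed (all namespace and function names here are invented for illustration, not taken from anyone's workspace):

```clojure
;; Style 1: Polylith's conventional proxy style.
;; components/user/src/com/example/user/impl.clj (hypothetical path)
(ns com.example.user.impl)

(defn greet [username]
  (str "Hello, " username "!"))

;; components/user/src/com/example/user/interface.clj (hypothetical path)
(ns com.example.user.interface)

(defn greet
  "Proxy function: just delegates to the impl namespace.
   (A real workspace would :require the impl ns; the fully
   qualified call here lets this sketch load as one file.)"
  [username]
  (com.example.user.impl/greet username))

;; Style 2: skip the proxy and define the function directly
;; in the interface file, as suggested above.
(ns com.example.account.interface)

(defn greet [username]
  (str "Hello, " username "!"))
```

Callers look identical either way; the proxy style just keeps the implementation free to grow without touching the public namespace.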


Now, that's an interesting thought. I feel like there is still value in doing it the Polylith way, but that's worth realising.


I don't think swappable components are the only value, or even the most important value, of Polylith. To me, separation of development and production is one of its important values, and that is achieved not only via swappable components but also by defining projects with their own deps.edn. The projects then become the deployable artifacts. For example, we have 5 projects that are deployed on different instances, each with its own auto-scaling setup in production. We also have a sixth project named dev which combines all 5 projects into one single deployable artifact, and we use it for testing and development (to reduce costs and complexity while developing). These projects do not use different components; they use the same components in different combinations. I find swappable components useful in two contexts: 1) when refactoring the codebase to use a different dependency, or implementing the same functionality in a different way; 2) when different environments require different behaviour, like production calling a lambda function vs local calling the function directly, or production connecting to a remote db while tests use an in-memory db.
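As a sketch of what that looks like on disk (paths and coordinates below are hypothetical, not from this team's repo): each project is just a directory with its own deps.edn pulling in the bricks it needs, e.g.:

```clojure
;; projects/my-service/deps.edn (hypothetical)
{:deps {org.clojure/clojure {:mvn/version "1.11.1"}
        poly/user           {:local/root "../../components/user"}
        poly/email          {:local/root "../../components/email"}
        poly/api            {:local/root "../../bases/api"}}}
```

A dev-style project would simply list the union of the bricks from all the services, producing one artifact built from the same components.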


Thanks @U2BDZ9JG3, I think you're absolutely right! I overlooked the per-project deps.edn not because I don't think it's valuable, but because we already have a monorepo setup with multiple deps.edn files, so we see some of that value already. Having said that, I do think that smaller components with better interfaces could also be a really great value proposition. I'm interested in what you're saying about using the same components in production and dev. Do you use microservices? If you do, do you have difficulties with multiple microservices cohabiting in the dev project?


I'm asking because I was expecting to hear a lot of talk about approaches like the user / user-remote polymorphism shown in the poly tool's getting-started guide, using HTTP in production and direct function calls in development. But I'm getting the impression that neither you nor Sean actually does that in practice. So maybe it isn't such a big deal.
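To make the user / user-remote idea concrete (namespace names loosely follow the poly getting-started guide, but the bodies and strings below are invented): both components define the same interface namespace, and a project chooses which one goes on its classpath, so callers never change:

```clojure
;; components/user/src/se/example/user/interface.clj -- local implementation
(ns se.example.user.interface)

(defn hello [name]
  (str "Hello " name "!"))

;; components/user-remote/src/se/example/user/interface.clj -- same ns!
;; In production this would make an HTTP/RPC call; stubbed here.
;; (Loading both in one file just shadows the first definition;
;; in a real workspace only one component is on a project's classpath.)
(ns se.example.user.interface)

(defn hello [name]
  (str "Hello " name " - from the remote component!")))
```

The production project would depend on user-remote while the development project depends on user, with no change to calling code.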


I guess another reason I'm focusing my attention on understanding the value of interfaces.clj with proxy functions is that this is where a lot of the extra boilerplate comes in, which is one of my main sticking points with Polylith.


Regarding interfaces, @U1G0HH87L wrote a nice answer on the GitHub issue here:


We are not using microservices, and I am mostly against using microservices. We rather have very small Polylith components (max 1000 LOC each), combine them into rather macro services, and split those when there is a non-functional need. We started with one single service; then, as our business grew, we split it based on auto-scaling requirements and traffic. It is very easy to create a new artifact once there are many small components -- it's just creating a new deps.edn. I definitely think that without Polylith we would not be as productive as we are now. Everyone in our team knows where to look, we speak the same language and terms when talking about our codebase, and it's easy to experiment with new features and attach them to existing systems. It takes some time to get used to Polylith, and without spending some time developing some new features with it, it could be hard to see the benefits.


The small size of components (our two largest components are around 700 lines and those are definitely outliers -- most of the rest are under 100 lines) and the clearly defined public "APIs" (`interface`) for each is definitely a win in terms of code organization for us -- I wrote about how it makes us think more carefully about naming and about dependencies. It's also really nice to have the interface functions in strictly alphabetical order instead of having to hunt for the public functions at the bottom of a namespace if there's a bunch of private implementation stuff above it (due to Clojure's declare-before-use rule -- we avoid declare, in general).


I'm finding it also helps me focus on what to test, since interface_test.clj is specifically going to test the interface.clj functions, but you can still have impl_test.clj for things in the implementation that you want tested directly.
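A minimal sketch of that split (all names here are invented): interface_test.clj exercises the public interface, while any impl_test.clj would target implementation namespaces directly:

```clojure
;; components/user/src/com/example/user/interface.clj (hypothetical)
(ns com.example.user.interface)

(defn greet [username]
  (str "Hello, " username "!"))

;; components/user/test/com/example/user/interface_test.clj (hypothetical)
(ns com.example.user.interface-test
  (:require [clojure.test :refer [deftest is run-tests]]))

(deftest greet-test
  ;; test goes through the public interface, not the impl namespace
  (is (= "Hello, Ada!" (com.example.user.interface/greet "Ada"))))

(run-tests 'com.example.user.interface-test)
```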