
Hello 👋 I learned about Polylith a few months ago and decided to read through the docs to understand the concepts and the tooling. As someone who has applied ports and adapters/hexagonal architecture quite heavily in recent projects, a lot of it just makes sense. However, there’s one thing I’ve been scratching my head about: does Polylith encourage swapping component implementations when running tests? If so, how?

I see that in the example there’s a profile component with a .store namespace that knows 1. how to run SQL queries and 2. about the database component that defines the database connection. Then, in .store-test, the function that returns the db is re-deffed with the appropriate configuration for the test.

For the sake of illustration, let’s imagine a slightly different example: what if the profile component reads data not from the DB, but from an external API over HTTP? I assume access to this external API would be encapsulated into a separate component (let’s call it profile-external-api). But now, when running the tests against the profile component (the tests that exercise its logic), I also assume that a dependency on actual HTTP requests to an external API is undesired. In this scenario, following the example of .store-test above, the test namespace would have to redef all the functions from profile-external-api.interface with mock implementations.

If this is the case, the “swapping” feels less like swapping one thing (a component) for another and more like ad-hoc mock setups here and there. The individual functions are swapped out as needed instead of the component as a whole. And what if there’s yet another component (let’s call it profile-v2, excuse the lack of creativity) that depends on profile-external-api as well? Would the tests of this component also have to mock individual functions of profile-external-api.interface?
Would the common mock implementations be extracted into a profile-external-api-mock component, but still be used for re-deffing at the function level instead of at the component level? When using system tools like Component or Integrant, one would simply define a different system for prod, development, and tests; the system maps the component names to their implementations. When I look at Polylith, that mapping happens in the project definition (not at runtime but at build time). Maybe the answer here is to have a separate project for running the tests, since that would allow a different set of components to be wired up? Would that make sense? That would mean tests could not be run from the development project, which is where one would be running a REPL. Is there something I might be missing?
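(For concreteness, the function-level mocking I mean would look something like this — every namespace and function name here is hypothetical, just to illustrate the pattern:)

```clojure
(ns com.example.profile.core-test
  (:require [clojure.test :refer [deftest is]]
            [com.example.profile.core :as profile]
            [com.example.profile-external-api.interface :as external-api]))

;; Each function of the external-api interface that the profile logic calls
;; gets redefined with a canned implementation, one test namespace at a time.
(deftest fetch-profile-uses-external-data
  (with-redefs [external-api/fetch-user (fn [_id] {:id 1 :name "Ada"})]
    (is (= "Ada" (:name (profile/fetch-profile 1))))))
```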


Swappable component implementations are a useful tool but not the only tool available. You can use Component (or Integrant) with Polylith and use the same approach for testing you already use, swapping in mock Components (or config) just for the tests.
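As a sketch of that approach (names are illustrative, not from any actual codebase): with Component, the tests just assemble a system map with a mock in place of the real dependency.

```clojure
(ns com.example.app.system
  (:require [com.stuartsierra.component :as component]))

;; Real HTTP-backed client (sketch -- start/stop would manage real resources).
(defrecord HttpProfileApi [base-url]
  component/Lifecycle
  (start [this] this)
  (stop [this] this))

;; Mock used only in tests: returns canned data, performs no HTTP.
(defrecord MockProfileApi [responses]
  component/Lifecycle
  (start [this] this)
  (stop [this] this))

(defn prod-system []
  (component/system-map
   :profile-api (->HttpProfileApi "https://api.example.com")))

(defn test-system []
  (component/system-map
   :profile-api (->MockProfileApi {1 {:id 1 :name "Ada"}})))
```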


At work, we use swappable components to select either a Hato-backed http-client component or an http-kit-backed http-client, depending on the app we're building (we have one legacy app that has to run on JDK 8 and can't use Hato). By default, we use Hato for everything during testing, although one of our projects specifies the http-kit version -- and we can choose to run all the tests using that version if we want, via Polylith profiles.


I look at swappable components as a per-project implementation choice, not as something you would swap for testing.


Yes, that’s how I see it too. When testing, you sometimes want a fake implementation of an external system. If you already have a component, e.g. system-abc, for the real system, you may want to implement a fake component system-abc-fake and use that in a separate test project and/or in the development project.
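(Concretely, both components would expose the same interface namespace, so a project can depend on either one — a sketch with made-up names:)

```clojure
;; components/system-abc/src/com/example/system_abc/interface.clj
(ns com.example.system-abc.interface
  (:require [com.example.system-abc.core :as core]))

(defn fetch-record [id]
  ;; Real implementation: talks to the external system.
  (core/fetch-record id))

;; components/system-abc-fake/src/com/example/system_abc/interface.clj
;; Same namespace name, different component directory.
(ns com.example.system-abc.interface
  (:require [com.example.system-abc.fake :as fake]))

(defn fetch-record [id]
  ;; Fake implementation: canned data, no I/O.
  (fake/fetch-record id))
```

Calling code only ever requires com.example.system-abc.interface; which implementation it gets is decided by which component the project includes.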


An idea could be to add support for profiles for all projects, not just development. Then this kind of swapping of components could be done more easily without duplicating projects.


Thanks for the responses! A few follow-up questions:
1. @U04V70XH6 where do you manage the Integrant system in this case? Do you set it up in a base and inject it into components?
2. Do you use protocols and records to specify/implement the http-client?
3. How do you decide which one to use? Env variables?
4. @U1G0HH87L I read the page on profiles and I admit I didn't fully grasp the example given yet. I'll go through it again. Do you believe profiles are how one would specify different configuration between development and testing builds?
5. Would this mean that tests run from the REPL would behave differently from those run through, say, kaocha?


@U3RBA0P4L We use Component, not Integrant, so things are a bit different I think. We define the Components and their lifecycles in the components that primarily depend on them, and then we build the complete system in each of the bases as needed. We have a fairly big chunk of Components that generally all get built together, and we have an "application" Component that ties all of the commonly-used Components together; it currently still lives in a legacy subproject but will migrate to a Polylith component "soon".


For the http-client, no, we just use the Polylith .interface and swappable implementation components. No need for protocols or records there. I've blogged about this a bit


As I explained above, the choice of implementation component is made on a per-project basis, with profiles used for the development project. It's a build-time choice, not a runtime choice. Each of the projects specifies the specific implementation it wants -- most of them use the Hato-backed implementation because they run on JDK 17 or JDK 18, but one of them has to run on JDK 8 and uses the http-kit-backed implementation. It's "just" a dependency in the project's deps.edn file.
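(In deps.edn terms, the per-project choice might look roughly like this — paths and component names are invented for illustration:)

```clojure
;; projects/modern-app/deps.edn -- picks the Hato-backed implementation
{:deps {poly/http-client {:local/root "../../components/http-client-hato"}}}

;; projects/legacy-app/deps.edn -- must run on JDK 8, so picks http-kit
{:deps {poly/http-client {:local/root "../../components/http-client-httpkit"}}}
```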


We don't have "development and testing builds". We rely on poly test to select the bases and components on a per-project basis, using those projects' deps.edn files. Development is done using the :dev alias (the development project) and the :+default profile (alias), and poly test :dev uses that profile by default (which selects the Hato-backed implementation). We can also run poly test :dev +http-kit to run tests using that profile (which selects the http-kit-backed implementation). You could also choose to start a development project REPL using the non-default profile(s) if you wanted to develop against the alternative implementation. You could have two REPLs running and switch between them if you wanted (although keeping them both in sync would be harder).
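(The profile aliases described here would live in the workspace-level deps.edn, something like the following — alias and path names are made up for illustration:)

```clojure
;; deps.edn (workspace root) -- profile aliases for the development project.
;; Profiles are ordinary aliases whose names start with "+".
{:aliases
 {:+default  {:extra-deps {poly/http-client
                           {:local/root "components/http-client-hato"}}}
  :+http-kit {:extra-deps {poly/http-client
                           {:local/root "components/http-client-httpkit"}}}}}
```

Then `poly test :dev` picks up :+default, while `poly test :dev +http-kit` swaps in the http-kit-backed component instead.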


I run tests from my editor (i.e., via my REPL) when developing, so I mostly test against the default profile. I also keep a poly shell open so I can run complete suites of tests, either against projects or against development with either profile as needed.


The way @U04V70XH6 works is very much how I and @U2BDZ9JG3 work too, except that (if I remember right) Sean also includes development when running the tests, by passing in :dev. Furkan’s idea has always been that we should only test the “production systems” (the projects under the projects directory). I haven’t had the need to test the development project in the Polylith repos I’ve worked with, but that doesn’t mean I see any problem with it. I’m not sure which way is best for you: to test via the development project using profiles, or by mocking. I think you have to experiment and see what works best for you @U3RBA0P4L.


Yeah, I've actually pretty much stopped doing poly test :dev these days. Our CI does poly test since:previous-build and, if successful, moves the previous-build tag to the new HEAD.

👍 1
💪 1

Thanks for all the insight. I guess it makes sense for me to create a small proof of concept codebase to see how stuff works in practice. When I have the time to do so I’ll make sure to report back.


We're at ~42,500 lines of code migrated into Polylith at this point:

projects: 20   interfaces: 44
  bases:    14   components: 45
Out of 125K lines total. I just spent the last three workdays refactoring legacy code into three bases and three projects (which let me retire one of our legacy "projects"). Still a lot of lower-level code to refactor into components tho'... (another blog post will appear at some point, but mostly it's just a refactoring slog at this point!).

polylith 4
👏 3

That's super cool! Did you have to write shim code to bridge between the two codebases, or were you able to nestle one within the other seamlessly?


Some bricks and projects depend on legacy subprojects within the (same) monorepo using :local/root dependencies. Some legacy subprojects depend on components directly via :local/root deps. We're gradually teasing those legacy subprojects apart.
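(That two-way bridging via :local/root deps could look like this — directory names invented for illustration:)

```clojure
;; components/billing/deps.edn -- a Polylith brick depending on a
;; legacy subproject in the same monorepo.
{:deps {legacy/accounting {:local/root "../../legacy/accounting"}}}

;; legacy/reporting/deps.edn -- a legacy subproject depending on a
;; Polylith component directly.
{:deps {poly/billing {:local/root "../../components/billing"}}}
```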


I talk about the process on my blog


Ah, thank you for that tip; I've read your whole monorepos series a couple of times but missed that reference. Thanks, also, for that series 👏


Feel free to ask any Qs that the blog posts don't answer!

gratitude-thank-you 2

What do you use to count Clojure lines?


@U011NGC5FFY It's a dumb bash script that does find and wc and a few fgreps 🙂

👍 1

The migration continues:

projects: 20   interfaces: 48
  bases:    14   components: 50
and we're at just over 44,800 lines of Polylith code out of a total of 137k lines in the repo!

polylith 11

@U04V70XH6 We are considering using polylith for an upcoming project. Care to share your experiences so far and what benefits/synergies or potential cons you have run into?


@U04V70XH6 ok just saw your blog - will start by reading through that. Thanks for writing it down, very useful


@U4VDXB2TU Feel free to DM me with any Qs from reading the series of blog posts -- I'm well-overdue on another post but a data center migration project took a lot of my time away from code lately (unfortunately).


And now:

projects: 20   interfaces: 57
  bases:    14   components: 59
and just over 48,300 lines in Polylith! But it has been "a week" working on this migration... and still a long way to go (only 35% done!).

polylith 9
clojure-spin 1

The bulk of the recent refactoring has been to "invert a pyramid" where we used to have a very high-level ns that called down into a lot of code that depended on some large "ball of yarn/mud" legacy subprojects, but we needed to make that piece easily callable from lots of other places (both legacy and Polylith) -- so I moved it into a component and then refactored everything that it called directly into components (some new, some existing). Just enough refactoring to remove any dependence on legacy subprojects from this ns.


It's touched 116 files so far (in this one PR I'm working on).


Nice! Sounds like Polylith makes it easier to tease things apart in a more controlled way?


The refactoring is painful, but the end result is better. Polylith is forcing us to tease things apart in a more controlled way. It's like medicine: it tastes nasty but it is doing us good 🙂

😊 5
💊 2