Yeah, there are two schools of thought on this — both of which pass one component around in high-level code. 1. If most of your high-level functions ultimately need to pass “all” of the parts down into functions they call, you might as well just use the “system” component as-is. 2. At the top level, pass in a custom component/system-map that contains only the pieces that are truly needed. I’m in camp #1 and my colleague @hiredman is mostly in camp #2, I think — certainly our Ring handlers differ in approach: I tend to just assoc in the “system” component (as :application) but he often builds a Ring-handler-specific component (for each handler) and declares exactly the dependencies that are needed within that Ring handler.
The upside of his approach: you aren’t passing around unnecessary pieces of the overall system (so it follows the Usage Notes more closely). The downside of his approach (in my opinion): if you need to modify a handler to call some new function that needs a part of the system that is not available in the handler-specific component, you’ve got to track back to where it is created and add another dependency — so the code changes are not as localized. The upside of my approach: if you pass around the whole “system” in the higher layers, you’ve always got all the pieces you need for the lower layers. The downside of my approach: you’re passing around a lot of stuff that is potentially unnecessary, so you can’t just look at a handler’s declared dependencies and know what it can and cannot call — and there’s also a temptation to add calls at the “wrong” level, since there’s no pain associated with picking apart the “system” anywhere in that call chain to add calls to new functions.
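The two styles above might be sketched roughly like this (names such as `:application`, `:datasource`, and `:cache` are illustrative, not taken from the real codebase):

```clojure
;; Camp #1: middleware assoc'es the entire system into each request
;; as :application, so every handler can reach any part of it.
(defn wrap-application
  [handler system]
  (fn [req]
    (handler (assoc req :application system))))

;; Camp #2 style: the handler is backed by a component that declared
;; only :datasource and :cache, so those are the only keys visible.
(defn user-handler
  [{:keys [datasource cache]} req]
  {:status 200
   :body   {:datasource datasource :cache cache :uri (:uri req)}})
```

In the camp #2 style the handler-specific component would typically be built with `component/using` so its dependencies are declared where the component is constructed.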
A real example of this is that I’m refactoring our codebase to Polylith and we have tests that need our “system” component — or rather a subset of it. Not all the pieces of our “system” have been migrated to Polylith yet but the core pieces have (caches, configuration, datasource, environment, host services) so the tests have a fixture that builds a Component with just those pieces needed for the particular functions under test. Which is better than how we handle it in legacy tests, which just build the entire “system” (which is a pretty sprawling beast).
My take is, if you pass in the whole system, then you cannot parameterize things. Like maybe you have some kind of abstraction over payment gateways, a protocol G, and you have some handler H, and given a G, H will provide a webform for some payment method. If H looks up a G implementation directly in the system map: 1. you can’t change the name of G in the map without changing H, and 2. if you have two different Gs and want two different instances of H parameterized with different Gs, you’ll need to specially support that in H somehow instead of just using Component’s existing dependency mechanism
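A minimal sketch of that argument (the protocol and record names here are made up for illustration):

```clojure
;; The abstraction G: a protocol over payment gateways.
(defprotocol Gateway
  (payment-form [this amount]))

(defrecord CardGateway []
  Gateway
  (payment-form [_ amount] {:method :card :amount amount}))

(defrecord BankGateway []
  Gateway
  (payment-form [_ amount] {:method :bank :amount amount}))

;; The handler H receives its Gateway as a declared dependency instead
;; of looking one up under a fixed key in the system map, so two
;; instances of H can be wired with different Gs (e.g. via
;; component/using) without any special support inside H.
(defn handler [{:keys [gateway]} amount]
  (payment-form gateway amount))
```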
I am still kind of having polylith bounce around in my head, but I think it basically has the same issue
Because it builds on clojure namespaces and clojure namespaces are not parameterized
So if I want two different implementations of the same interface in a project I am out of luck
The interface can wrap a polymorphic implementation based on protocols which is probably how I would structure that sort of scenario.
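One hedged sketch of what “the interface wraps a polymorphic implementation” could look like (names invented here): the interface function stays fixed while the concrete behavior is chosen by whoever constructs the value passed in.

```clojure
;; A protocol behind the interface.
(defprotocol Storage
  (fetch* [this id]))

(defrecord InMemoryStore [data]
  Storage
  (fetch* [_ id] (get data id)))

(defn fetch
  "The stable entry point callers use; swapping implementations means
   constructing a different Storage value, not changing callers."
  [store id]
  (fetch* store id))
```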
I’m also yet to be convinced of Polylith’s real-world swappability at the component level 🙂
That’s all very interesting. Thanks for the explanations. For a sufficiently large system, I find myself leaning towards #2 (just passing in a subset of the system that only has what is needed) over #1 (passing in the entirety of the system).
However, my preference is to pass in a function’s context as separate parameters. This makes it very clear which things are being used / are actually required whereas passing in an arbitrary system map means I’m potentially passing around things that are never used. Once I get to a point where the “context” / number of parameters is large, that’s a code smell that indicates to me that I need to consider refactoring my system in some way. I think just moving everything into the system map and passing the system map in can mask some of those complexity issues without actually addressing them.
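A small sketch of the trade-off being described (functions and keys are hypothetical):

```clojure
;; Explicit parameters: the signature documents exactly what's required.
(defn greeting [greeter locale]
  (str "[" (name locale) "] Hello from " greeter))

;; Context map: same behavior, but the signature alone no longer tells
;; a reader which keys of the map are actually used.
(defn greeting-from-ctx [{:keys [greeter locale]}]
  (greeting greeter locale))
```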
I’ve been looking at Polylith too, but based on their “real-world-example” app I’ve yet to see how it really addresses the issues I’m having here
No idea if it will help @stephenmhopper but here’s a version of my Component-based usermanager example app converted to Polylith: https://github.com/seancorfield/usermanager-example/tree/polylith
Thanks for the example. I don’t think this particular app I’m working on is to a point where it makes sense to move it to Polylith. I do appreciate how in your example Polylith app, you still push configuration components down to the layers that need them. Your user component for example requires the DB component and passes it to a function that uses that component when running the query. The “real-world-example” polylith app doesn’t do this. The user component would just reference the DB namespace directly and call it and the DB namespace embeds the config information, hiding it from the caller. I much prefer your approach to the other example’s approach
@stephenmhopper Yeah, for a “real world” example, it’s still somewhat contrived. For us at work, we pull the database config from external files, so our Database component depends on our Environment component, and both of those also depend on our “host services” component (for hostname, JMX beans, etc). So our “system” always has at least those three plus a Caching component — and pretty much everything uses some combination of those. When our “system” starts up, that means “host services” is start’d first, then Database, etc, until everything is running and that combined “system” is passed around through all the top-level code (mostly Ring handlers).
My feeling re: 1 param vs several params — if the function doesn’t touch those (subcomponent) params and just passes them all through to functions it calls, having multiple parameters adds no value, and just creates a maintenance problem if a lower-level function needs an additional subcomponent for whatever reason. Better to have a single Component passed through — even if it is a #2 custom one — than multiple params. I only write functions taking multiple Components if they are specifically using them directly, then it is their immediate caller’s responsibility to provide the appropriate things. In my view, it’s the same argument in favor of passing a single hash map instead of a lot of separate parameters: code readability and easier maintenance. I don’t think there are any hard-and-fast rules about which way to go — I think you’ll develop an intuition about it after you’ve been working with a codebase for a while.
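The pass-through point can be sketched like this (names are illustrative): the middle layer threads one component value through, so its signature never changes when lower layers need more pieces.

```clojure
;; Lower layer: picks the piece it actually uses out of the component.
(defn save-user!
  [{:keys [datasource]} user]
  {:saved-in datasource :user user})

(defn register-user
  "Middle layer: just threads `app` through. If save-user! later also
   needs :cache, this function's signature is untouched."
  [app user]
  (save-user! app user))
```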