This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-06-05
Channels
- # announcements (1)
- # babashka (6)
- # beginners (37)
- # clojure (4)
- # clojure-europe (6)
- # clojure-india (3)
- # clojure-spec (6)
- # clojured (1)
- # clojurescript (14)
- # datalog (5)
- # gratitude (1)
- # helix (3)
- # hyperfiddle (1)
- # interop (6)
- # leiningen (2)
- # off-topic (142)
- # other-lisps (2)
- # pathom (20)
- # releases (1)
- # rewrite-clj (4)
- # shadow-cljs (5)
- # tools-deps (3)
This is really off-topic but anyway: is it just me, or has Lego changed a lot since I was younger? I was trying to find a set for my daughter (I want the "duplo" sets for now) and I can't find one that has a sufficient number of blocks... it seems that almost every Lego set I can purchase somehow pre-defines what you're expected to build (a house, a zoo, etc.) and there's no "generic, just a bunch of blocks" set anymore...
yup! seems pretty sad https://news.ycombinator.com/item?id=30242913
They have changed a lot since the 1980s
You can go to a Lego store and buy as many blocks of as many basic shapes/colors as you want, if you want to get what used to be a "basic set"
but the basic set I bought in 1980s also had a book with a bunch of ideas for things to build, which by the time you flipped through it or did one to three of them, you looked at that big pile of basic blocks and realized it was up to whatever you could imagine.
I don't know where to acquire a book of ideas like that, but I imagine you can find things like that on-line
The discussion linked above I finally started reading, and there are good ideas in there. This book looks interesting: https://www.amazon.com/Lego-Ideas-Book-Unlock-Imagination/dp/0756686067/
There is actual relevance between this and Clojure, I think! Or at least software design advice from Rich Hickey in several of his talks. I often compare API design to modern legos vs. old legos. Old legos were small basic pieces you could combine to make anything you want. New legos are often like 3 or 4 old legos glued together and result in something that can only be used in a few ways. Rich's advice (never explicitly applied to Legos, that I have ever heard): keep pulling things apart.
Wow, that went more on-topic than I thought 🙂. Unfortunately, where I live there are no Lego stores so I need to purchase online. I'm struggling to find a good "basic set" for my 2-y.o. daughter (I was able to find a basic set of "normal" Lego but the pieces are too small for my child). I'm really thinking about purchasing some Lego equivalent for now, and then hoping Lego does not change even further...
@U3Y18N0UC this is directly explained in the comments section of a clip from a community episode
European courts decided that blocks weren't copyrightable so the company changed their strategy to the cookie cutter kits
They do a lot more of “generic pieces in a box”-things now than in many many years. Not sure about Duplo, but for regular Lego:
https://www.lego.com/en-us/kids/sets/classic/bricks-and-animals-72816a772ef440649048922dde509c84
Don't know the situation where you live, but a few years ago when we wanted Duplo for our kids we bought two gigantic bags of random Duplo for almost nothing. We spent a few years building and then sent it on to parents in need 🙂
@U3Y18N0UC I bought a generic set of duplo for my daughter. But she was more interested in the little animals and people than the blocks for a long time. These days she also enjoys clicking things together.
@UCW3QKWKT I can purchase from Amazon, but it seems that the prices did indeed go up in the last few years (like, 2 or 3 - I saw some 2019 reviews of a Lego set that the user said he bought for $25 USD, and now the same model in Amazon costs about $86 USD 😱)
Actually RH does reference Legos in Simple Made Easy at https://youtu.be/kGlVcSMgtV4?t=1256 when he describes the "knitted castle problem". I'm a SME (not a programmer or engineer) building a supply chain app and I will forever remember listening to this talk 7 years ago. We chose Clojure and Datomic purely because of Rich's philosophy. I could map almost every one of his points directly to mfg. supply chain https://en.wikipedia.org/wiki/System_of_record and their woeful inadequacies. I share links to Rich's talks with investors, other SMEs (non-coders), etc., all the time. Anyone trying to solve hard problems. Most (7 out of 10) choose to solve them using addition (which just introduces more incidental complexity) while ignoring subtraction. I hope they have found his talks as enlightening as I have.
Am I the only one who feels like we're doing something wrong with the proliferation of DI frameworks in Clojure land? Why can't we figure it out? Lisp curse? Weren't there 3 released just this year? It's getting to a point where evaluating them on their merit becomes almost irrelevant, they all do approximately the same thing, probably well. So why do we keep at it?
I haven't really released https://github.com/nivekuil/nexus yet, but I wrote it out of genuine anger with integrant
I really don't like framework proliferation either and that's why I just decided to lean on an existing one, pathom. I have a similar feeling with CLI frameworks, I think you could just use reitit to power both a cli and REST api at the same time
I've been curious about that too. I wonder if it's just something like text editors where it's a highly personal preference
XTDB wrote its own DI framework too.. maybe @U899JBRPF could speak to that
At my job we use Component and briefly considered switching but decided that it was Good Enough ™️ . I wonder if the issue is people aren't willing to settle for Good Enough for their particular use cases. (I use Integrant in my personal projects but not at the cost of a week of work just to achieve what we already have)
The cost of creating a new one is quite low. One could ask: why aren’t there even more of them?
There is no Board of Libraries that prevents people from publishing whatever someone wants to open source, so duplication of hobby level time effort is commonplace
Perhaps that can partially be attributed to the Lisp Curse, if you realize that such things only take hobby level time effort in a Lisp family language. In some other languages, such things are said to require so many person-hours of effort that people are more likely to take the first one that is offered, and/or try to enhance a popular choice rather than create a new alternative
this is true but besides the point, I think. It may be cheap to write one but gets expensive to evaluate all of them, and there's no clear winner in this space yet
XT does its own thing, yep, but could probably have used Integrant after all. We went with full control to ensure a happy non-Clojure experience (i.e. JSON-first config), and there was some early concern about a functionality gap with Integrant that might have been overblown. James could definitely elaborate more if there's interest 🙂
Who says you have to evaluate them all?
They didn’t all need to be written, and they don’t need to all be paid attention to :-)
but which need to be paid attention to?
The original question was along the lines of: why are there so many libs satisfying similar desires? It seems to me one reasonable way to answer that question is to ask publishers of newer Libs to give a rationale for why they created it, preferably with some comparison to earlier popular libs in the same category. They are not obligated to do so, but you could ask them to
Not unlike Clojure having a written, published rationale for its creation, since there were already several thousand programming languages at the time it was created
The more of them exist, the more elaborate the comparison matrix becomes. I'd rather a minimal standard / API everyone could develop against
tldr: DI is good, DI libs are bad
dependency injection is a good idea if you genericize it to mean: "don't reach out to magic locations to get stuff". most DI frameworks are imo not bringing enough extra value to be worth the cognitive effort of understanding how they bring DI to your code. you are much better served by making exactly what you want for your app (but you should probably just keep that to yourself).
Does cognitect not use Component internally?
I am not aware of any way to test stateful systems other than by exercising them in test harnesses, going through sequences of steps and somehow observing that they did the expected thing: either by spot-checking a few selected observable things, or by going to the much bigger effort of creating a separate reference model that emulates what should happen (the latter approach is rarer because of the cost of developing the reference model; it is commonly used in hardware development or life-critical software, versus less critical software)
DI isn't magic, it just provides indirection for access to resources. if you have 10 things to start, you can start those 10 things and give resource references to whatever code needs it. that does not require any framework. if your needs get complex enough you may move to wanting something that provides more functionality. the existing frameworks have different takes on that, but you can also just make something tailored to your needs. but I think people do this way earlier than they need to.
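The hand-wiring described above can be sketched in a few lines. Everything here is a stand-in, not a real API: the "db" and "cache" are just maps, and the handler is a toy function — the point is only that dependencies are passed explicitly and stopped in reverse order.

```clojure
;; Minimal hand-rolled "system": start resources in dependency order,
;; pass references explicitly, stop in reverse. No framework involved.
(defn start-system []
  (let [db      {:conn :fake-db}            ; e.g. (db/connect! cfg) in real code
        cache   {:conn :fake-cache :db db}  ; cache depends on db
        handler (fn [req] {:db db :cache cache :req req})]
    {:db db :cache cache :handler handler}))

(defn stop-system [{:keys [cache db]}]
  ;; stop in reverse dependency order, e.g.
  ;; (cache/stop! cache) then (db/disconnect! db)
  nil)
```

Ten components is still just ten let-bindings; a framework only starts paying off when the graph gets bigger or more dynamic than that.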
I spend a lot of time using the Maven libs and they have the wreckage of 3 or 4 generations of DI, all of which is way harder than if they had just constructed objects and passed them to things that needed them.
It's a lot simpler if you work at a correct level of abstraction (re language of the system)
@U064X3EF3 it: a system-ish language, a bit like Erlang's OTP behaviors or core.async pipelines (but more is needed), where the developer works mostly at that level and not at the implementation-logic level. Business logic would be a parameter in this model
XTDB DI is convenient for DI uses but if something crashes during initialization then I'm not sure how to shut it down gracefully :thinking_face:
And it's used in a place where it has to bind to external resources (kafka, postgres, S3, REST API). So occasional exceptions may not be possible to prevent :thinking_face:
I.e. it helps to reduce the size of the start-up config, but then the way the DI is coded may prevent the developer from shutting things down correctly, so it's probably a trade-off between binding many basic things and shutting down correctly :thinking_face: But then the factory functions could also simply return cleanup fns, as React does in useEffect
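That "factory returns its own cleanup" idea can be sketched with nothing beyond core Clojure. This is an assumption-laden illustration, not any library's API: each factory gets the resources started so far and returns a [resource stop-fn] pair, so a crash partway through start-up can unwind whatever already started.

```clojure
;; factories is an ordered seq of [key factory-fn] pairs; each factory
;; receives the resources started so far and returns [resource stop-fn].
(defn start-all! [factories]
  (reduce (fn [{:keys [resources stops]} [k factory]]
            (try
              (let [[resource stop!] (factory resources)]
                {:resources (assoc resources k resource)
                 :stops     (conj stops stop!)})
              (catch Exception e
                ;; mid-start crash: unwind what already started, newest first
                (doseq [stop! (reverse stops)] (stop!))
                (throw e))))
          {:resources {} :stops []}
          factories))
```

Graceful shutdown on a crashed init — the XTDB concern above — falls out of the same pair: you always know exactly which stop fns you owe.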
.... (Hey, why don't we create one more framework...)
Sometimes even software pros use DI frameworks where they need something else (IMO).
I went the other way and tried to build on top of component to provide a more user friendly API 😅
Oh yes, the API of component is something I always have to relearn 😄 But it works and it shuts down as I expect it to do.
Also one more difference between component and XTDB DI is that XTDB's DI returns node
itself (it's a database) instead of the all-containing object as in component.
rationale is a good point. updated my own project with clearer language on that. thanks!
https://github.com/weavejester/integrant#rationale > Integrant was built as a reaction to fix some perceived weaknesses with Component.
Then Clip was built as a reaction to Integrant, then Buddy, another I can't recall, by Serioga (hope I'm spelling his name right) and now init
I think we need to make a graph
I have no idea about sorting, sorting could be a client-side thing. Also the more I think of it the more it starts to look as a skill tree.
What dependencies does a person need to ingest to produce yet another dependency injection framework? Could be called "dependency injection for dependency ingestors". But yes, the graph itself would be nice to look at. I wouldn't care about what's actually there but maybe it could be something.
The graph that I thought about was the graph/tree that would say "hey, this library already influenced all of these libraries, please look them up before you do anything". Because it's not only DI frameworks that are influenced by each other. It's also programming languages, databases, OSes and so on. It's not a dependency tree, but it's also problematic because in the end everything influences everything else. This is why I thought about a skill tree too. Maybe somebody else has a better idea. This also could address the problem stated by Uncle Bob, who says that "every 5 years the programmer population doubles, which means that the average programmer has 5 years or less experience"
Kind of. But the links would say "less crappy" or "more OOP". These could basically be the sentences from original library authors that say that "I wrote this because I was fed up with library X". But we would either need to mine github or authors themselves would need to submit this. And keep them short :thinking_face: Anyway. I have to do some work. I'm procrastinating too much already.
the ecology of software
There is this thing: https://alternativeto.net/ But it doesn't go to library level.
I think https://en.wikipedia.org/wiki/Sayre%27s_law is relevant here: > In any dispute the intensity of feeling is inversely proportional to the value of the issues at stake.
Sadly, in various domains people are happier to rewrite everything from scratch than to take an existing repo, make a good-faith assumption that it was designed reasonably well, and then improve it step by step, aided by the git history, issue tracker, etc. It also tends to imply a commitment to avoid breaking API changes. It's always easier to dismiss something as conceptually flawed. That's rarely the case in Clojure projects. Ideas are cheap; high-quality implementations and long-term commitments are not
I'm reasonably optimistic about the situation being improved thanks to the better tooling that we have around every year. Better linting and IDEs = increased ease for understanding and running a foreign project
And probably there's some room for gentle 'teaching' of the virtues of more frequent PRs and high-quality issue reports - these strengthen our bonds as a real community (vs. collection of individuals)
I think this is the Lisp Curse 100%. The effort to create a new one is so minimal, if it was substantial, there wouldn't be many of them. Creating a DI framework in Java for example is a big undertaking, In Clojure, if you don't like minor details of an existing one, you can roll your own in the same amount of time you'd go looking for another one. Clojure is also really fun, making a DI Java framework sounds like a chore, but making one in Clojure is like this fun adventure, so I think you're again more likely to do it. Simple enough and fun enough that it can even be a learning experience.
That said, I agree with Alex, I do not use any DI lib in Clojure. Honestly, I have not tried to use one, but I still don't have a need to do so, I'm not really slowed down or hurt by not using one.
I spoke about this before, but I just use delay alongside some EDN config files.
My config looks like this:
<environment>-config.edn
And then my code just does:
(slurp
  (str "./config/"
       (or (System/getProperty "environment")
           (System/getenv "environment")
           "dev")
       "-config.edn"))
Then I have top-level Vars that load each singleton stateful thing using the config, all wrapped in a delay. And then my top-level functions just grab whatever is needed from those Vars and pass it to the functions they call. The delay handles dependencies on its own: if one stateful delay depends on another, it just derefs the other.
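A minimal sketch of that wiring, with made-up config keys — nothing starts until the first deref, and deref order resolves the dependencies for free:

```clojure
;; Three delayed singletons; the values are illustrative placeholders.
(def config (delay {:db-url "jdbc:..." :bucket "my-bucket"}))

(def db (delay {:conn (:db-url @config)}))   ; e.g. (connect! ...) in real code

(def user-store (delay {:db     @db          ; depends on db, which depends on config
                        :bucket (:bucket @config)}))
```

Derefing user-store transitively forces db and config, so the "dependency graph" is just the deref chain.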
If I wanted to be fancy and add reloaded workflow to it, I would replace my delays with this: https://github.com/aroemers/redelay and call it a day. But I have not really had a big need for that. Normally I just redef my top-level Vars and that's how I "restart", and I don't bother cleaning up anything haha
Anytime I looked at a DI lib in Clojure I'm honestly confused. All these systems, its so abstract, feel like I get lost at a level which isn't needed.
I think @U064X3EF3 and @U04V5V0V4 (in a defn episode) have the most interesting thoughts here: Why would you want a DI lib in the first place, i.e. what problem are you solving, and are there other ways to solve that problem? I wrote an (unfinished) blog series on how we do this at Ardoq: https://slipset.github.io/posts/dependency-injection-perhaps, but tl;dr: You probs have much less state in your system than you think. A lot of things are now accessed over https, rather than over a socket connection that needs to be open at all times.
if we're going to talk about principles, I am a big fan of the 12-factor approach to config: https://12factor.net/config
sometimes one has to access secrets and then Vault or AWS Secrets Manager or whatever is appropriate. The rest of it that you are discussing are developer affordances, which are definitely nice but :man-shrugging::skin-tone-3:
so anyway, System.getenv is your best DI friend ... and @U0BKWMG5B has a library, environ (https://github.com/weavejester/environ), to help with that 🙂
12 factor app is pretty old by now, best practices have evolved since then, storing config in env vars is outdated in my opinion, and an anti-pattern. Static config is best stored alongside code.
The more interesting part imo is having the building blocks and semantics for building systems. Do we need a DI lib? Something else? Hell, do we need our own OTP equivalent?
there is such a thing ... https://github.com/clojerl/clojerl
So just to clarify: I work in backend web stuff. I don't think there are deps in our app which are so dependent on each other that it isn't easier to just hand-write the wiring. And if you don't have stateful components, you don't need the reloaded workflow.
One of the things I’ve come to realize lately is that it’s best to not have problems.
That's probably universally correct, but now I have several databases, AWS, and Vault to talk to. Wiring it manually is no big deal, but being able to describe it in a single place, with the configuration mapping to the system structure, is nice
So I guess there is a dependency graph here, right? You ask vault for some usernames/password, then you use those to connect to your databases and sometimes to aws?
(defn config [master-user master-passwd]
  (let [creds (vault/get-creds! master-user master-passwd)]
    {:creds creds
     :db1 (db/connect! db1 (whatever1 creds))
     :db2 (db/connect! db2 (whatever2 creds))}))
Yes and no, because these aren't the concerns or semantics of the system. The more you elaborate on it the more you end up reinventing one of the DI frameworks. Again, I'm perfectly fine with not using any one of them. I'm not sure what the best way is, including testing, reloaded (?) and collaboration
no big deals are often nicer than having things in one place ... cos you often find that one place doesn't do exactly what you want and then has some added abstractions and blah you're annoyed again
Multi methods dispatching on URI schemas do wonders to alleviate my annoyance, apply twice a day
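One way to read "multimethods dispatching on URI schemes" — the strategies below are invented for illustration, but the dispatch mechanism is standard Clojure:

```clojure
;; Pick a resource-opening strategy from the scheme of a URL string.
(defmulti open-resource
  (fn [url] (.getScheme (java.net.URI. url))))

(defmethod open-resource "file" [url] {:kind :local-file :url url})
(defmethod open-resource "s3"   [url] {:kind :s3-object  :url url})
(defmethod open-resource :default [url] {:kind :unknown :url url})
```

New backends are a new defmethod away, with no central registry to edit — which is presumably the annoyance-alleviating part.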
I'm writing a new service at the moment and decided to give component a shot without defining any new records. It's not bad, but I could also tear it out and build everything without it
Another (almost) totally unrelated observation. Not everything one does needs to be made into a library. Now, I see that if you have a bunch of microservices (yet another problem I don’t have) you will probs end up in making libs of your common stuff, and then it’s of course tempting to open source it and stuff.
The flip side is lots of copy pasted code between those services which is an eyesore and difficult to maintain over a large time scale. What sort of code base and environment are you working with?
OTP is useful in an Actor context, because each Actor is a singleton long lived stateful object, you tend to want to restart a dependency tree of them if anything failed.
But, I guess I never had an AWS client failed on me, its true that I don't supervise them, I just do something like:
(def s3-client
  (delay (s3/make-client
           {:AWSCredentialsProvider (auth/make-provider-from-cred-file @aws-credentials-path)
            :Endpoint @s3-endpoint})))
Your fn that uses this gets passed an s3-client as a param; it doesn't use a global var. But if it did, you'd just redef the var?
You still inject it into all functions except for the top-level one; no dynamic binding involved. And for functional testing of the top function I use with-redefs. Though at that point I tend to have a real integ test using a real s3-client calling into a dev endpoint.
Think something like:
(defn api-x
  "Orchestrates all steps involved to deliver API X's behavior."
  [request]
  (let [a (step1 @s3-client (:param1 request))
        b (step2 @s3-client a (:param2 request))]
    (->Response a b)))
If you fancy, shove them on a context map:
(defn get-context []
  {:s3-client @s3-client})

(defn api-x
  "Orchestrates all steps involved to deliver API X's behavior."
  [request]
  (let [context (merge (get-context)
                       {:param1 (:param1 request)
                        :param2 (:param2 request)})
        a (step1 context)
        b (step2 context)]
    (->Response a b)))
Though I prefer to be more explicit about exactly what each function takes, so I tend to do the former.

How does Component hook itself up? Isn't it basically doing the same thing? Except that you'd be forced to make step1 into another Component, no? Which is one thing I don't like about some DI libs: they seem to be infectious through the code base.
And even in reality, I try to be as pure as I can, so I would actually break that step1 into two steps:
(defn get-user-document [s3-client bucket username]
  (get-doc s3-client bucket username))

(defn update-user [userdoc wtv]
  ;; Return an updated doc
  )

(defn api-x
  "Orchestrates all steps involved to deliver API X's behavior."
  [request]
  (let [userdoc (get-user-document @s3-client @bucket (:param1 request))
        updated-userdoc (update-user userdoc (:param2 request))
        b (step2 @s3-client updated-userdoc)]
    (->Response b)))
Then I generally test that top-level function by doing with-redef of all impure fns to mock their results to the test I want.
And maybe to be even more realistic to what I do 😝 I'll tend to have a facade. So what happens is I have these impure orchestrator fns, they take an initial input as a map or just many arguments, but generally they take a lot of input so a map is convenient.
(defn api-x-impl
  [{:keys [s3-client bucket username wtv]}]
  ;; Orchestrates between impure-only fns and pure business-logic fns for api-x
  )

(defn api-x
  [request]
  ;; Has try/catch, API metrics and audit publishing, retries, and is the
  ;; only thing accessing the global environment
  (try
    (with-metrics [m (make-metric "api-x")]
      (assert-valid request)
      (retry-on [RetryableException]
        (api-x-impl {:s3-client @s3-client
                     :username (:param1 request)
                     :bucket @default-bucket
                     :wtv (:param2 request)})))
    (catch Throwable e
      (log/error e)
      (->ErrorResponse e))))
> Yeah, so every static resource I turn into an argument and a component
Ya, so for me that's just handled by top Vars wrapped in a delay.
The downside is I don't have a teardown, but you can use the redelay lib if you want that: it gives you delays extended with a teardown, so you get restarts as well.
For me, I never teardown and if I want to "restart" I just reload my namespace that has my global vars with the stateful singletons in them.
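For anyone who does want a teardown without pulling in a lib, a tiny sketch (the client here is a fake map, and the shutdown call is only a comment): pair each delay with a stop fn that only fires when the delay was actually realized.

```clojure
;; A delayed singleton plus an explicit, realized?-guarded stop fn.
(def s3-client (delay {:client :fake-s3}))   ; stand-in for a real client

(defn stop-s3! []
  (when (realized? s3-client)
    ;; e.g. (.shutdown (:client @s3-client)) in real code
    :stopped))
```

If the delay was never forced, stop-s3! is a no-op, so you never tear down something that never started.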
And then you see, testing api-x-impl is super easy: you just call it in your test and pass it whatever you want.

So with Component, is the only difference that api-x would get the s3-client out of some global system map, as opposed to from a delayed Var?
With component the map is often not global, but instantiated at application start.
Also, every component sees only its dependencies at initialization vs. being able to theoretically "touch" everything
At development you'd alter-var-root and bind the system to some global var in user.clj, which you won't often need to look at
(comment (def dev-instance (start-components)))
(comment (.stop dev-instance))
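The alter-var-root variant mentioned above can be sketched without any lib; start-components/stop-components are hypothetical and stubbed out here with a plain map:

```clojure
;; Hold the one running system in a var; swap it atomically on start/stop.
(defonce system nil)

(defn start! []
  (alter-var-root #'system (fn [_] {:running? true})))  ; (start-components) in real code

(defn stop! []
  (alter-var-root #'system (fn [_s]
                             ;; (stop-components _s) in real code
                             nil)))
```

At the REPL you just call (start!) and (stop!), and everything downstream reads #'system.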
I do this, and then I have main where I have it wrapped in a try-catch, or maybe even redefine this same instance
And if I want to check something that involves a DB connection then I make a new DB connection from the REPL into the DB namespace. But then it's not even related to the "lifecycle".

Where is it stored if it's not global? In some closure? And say you have a function that gets a user document from S3, how is that function going to get the s3-client it needs to retrieve the document?
I do three things:
• When I run my app from main, I have the whole component-based start-up sequence, and then the system is located in the closure.
• But when I want to test by hand, I start this same component thing and restart from the main namespace (because then I save it into a top-level var). And then I can reload deep methods too.
• And when I want to test a single call, I spin up a test connection (in this case to S3) and then test that single call.
When you run in main, what holds on to the state afterwards? Say a request comes in, your web server picks it up, it calls your handler, now your handler calls a function and that function needs the s3-client? How is it all hooked up?
If you spawn a non daemon thread it will block and you dont have to hang on to that var
I'm still confused about where the state is stored and how it would reach the function that needs s3-client? Are all the handlers closing over the system map and then the server is holding on to all the handlers?
I'm not sure what you're arguing against.
In one of my repos I remember that I take the deeply nested value of the Jetty server (that project is based on Component) and then I .join on it from the -main fn. I think this locks up one thread, but for me it works.
It may be regarded as wasting a thread, but I didn't bother to care as there are larger things to solve.
For instance this is how I try out a PostgreSQL call when I have a model that hides the querying part from the rest of the code:
;; db_instance.clj
#_(def db-ref
(delay (hikari/make-datasource (mk-datasource-options (env :database-url)))))
;; users.clj
#_(user-roles-by-id (mk-model db.instance/db-ref) "hXNjD5ZNWope")
So yes, I instantiate the DB in the "prod" namespace, but it's a comment. And then I use it in a model-creation function as if it were passed from somewhere else. So when I want to test that external calls work, I use the DB namespace directly from the commented code. And yes, I have to instantiate the DB for my internal use every time. Maybe I could move it to some kind of user namespace.
But no, I don't create my every model using the DI framework. There is a place where I create a lot of these because for me there is no point of using DI on everything that moves.
Edit: Also yes, I used encapsulation.

I'm not making an argument, I'm just curious: the state has to live somewhere and it's got to be made available to functions down the stack that will need it. A normal web server won't magically hold onto some Component-made system map, so is it that a middleware holds onto it? Is it that the handlers themselves are stateful and close over it, or something else? It's just curiosity. I don't use Component, and I don't know if I would like it or not, but I'm just curious to understand what it does and how it differs from what I'm doing now.
Out of curiosity: s3-client is brought up as an example. But, IIRC, the AWS API is all http, so there is no client state here, other than a url, a username, and a passwd? Nothing that needs to be started up or torn down, unlike, say, a pg-connection?
Well PG connection is also based solely on credentials. I'm not sure how S3 client works but I hope it doesn't authenticate on every request. Because PG pool simply holds multiple open connections at the same time. And for S3 I hoped it would still hold some kind of connections (can they for instance listen to file changes? I've never used it but it could be a stateful connection. If they can listen to changes then the client would need to hold some state too).
For me frameworks like integrant seem "alright" up to a point and frameworks like duct seem like a complete overkill. Maybe I didn't need that much modularity yet. For instance I watched this vid: https://www.youtube.com/watch?v=tiWTpp_DPIQ and after 25 minutes it starts to become configuration for the sake of configuration and IMO it's really bad. The author has nice intentions but IMO it's wrong.
Edit: IMO with the inheritance of keywords and 'more than one "strategy"' we would end up with patterns like this (`when-prod` and `when-not-prod` could probably be compile-time macros that check if it's prod, and IMO it's bad because why not have a basic if/then):
(when-prod
(derive ::web-server/prod-jetty-server-magic ::web-server)
(defmethod ig/init-key ::web-server/prod-jetty-server-magic ... ))
(when-not-prod
(derive ::web-server/jetty-hackable ::web-server)
(defmethod ig/init-key ::web-server/jetty-hackable ... ))
;; also these could trigger during import time. So... Scala's unimported implicits anyone?
Why not have this instead (in this example (prod?) is still a macro, but now you have everything in one place):
(if (prod?)
  (defmethod ig/init-key ::web-server [_ _] implementation-1)
  (defmethod ig/init-key ::web-server [_ _] implementation-2))
This would compile the same way and we wouldn't need to have two defmethods in two different places. And we wouldn't need to use the frameworkisms twice. And all implementations would be found in a single place. And we would still benefit from the single-keyword config as the framework author suggested.
Also, what you can't do with keyword config is jump to the implementation, so by using a keyword you lose the IDE's help. So why even bother with multimethods then and not use the reference directly? Why is it so bad to import things? Why do we need to specify services in .edn files, and why is it better than writing code?

Edit2: Do we need a dependency-injection Slack channel?

@U0K064KQV Sean Corfield has an example app that uses Component you can look at to see how stuff can be wired up. What he does here isn't atypical in my experience. https://github.com/seancorfield/usermanager-example/blob/develop/src/usermanager/main.clj
lol maybe we should take this to a dedicated channel
In @U01EB0V3H39's example, it seems like the system map is closed over by the handler that is held in memory by ring-jetty. And then each time the handler is called, a middleware grabs the state from the closed-over system map and injects it into the route handlers for each route. So the API handlers (aka the route handlers) would have a map with all the stateful components on it, and then it'd be the same as what I do: you'd manually grab from that map and pass it down to other functions, or pass the whole map of components down. I'd have to try it; my first impression is that it's a little more complicated than I'd like, and reminds me too much of Spring 😝
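That middleware wiring can be sketched in a few lines, with a fake system map and handler and no real server involved — the middleware just closes over the started system and assoc'es it onto every request:

```clojure
;; Ring-style middleware that injects a started system into each request.
(defn wrap-system [handler system]
  (fn [request]
    (handler (assoc request :system system))))

;; A route handler pulls what it needs off the injected system.
(defn user-handler [{:keys [system]}]
  {:status 200 :body {:s3 (:s3-client system)}})

;; Built once at startup; the server holds on to `app`, which closes over the system.
(def app (wrap-system user-handler {:s3-client :fake-s3}))
```

This is the whole "magic": the server holds the composed handler, and the handler's closure holds the system.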
What do you even expect from a started app? When do you consider your code started? IMO it's when all connections are open and every part of the small or big monolith that you have has started with the configs it needs to start with. So it's considered "started" when the backbone doesn't change anymore. I don't see the word "Spring" in that sentence. It's only incidental that you learned Spring in your life and now brought it up as a negative example without making it explicit what's so negative about it. Spring works and you don't need to fear it like the plague; you have the experience to avoid the bad parts (which you also didn't mention (and we also don't have annotations/XMLs/WARs, so I'm not sure what you're talking about)). I think you don't know what you want. I think you have analyzed the frameworks too much and at this point you simply need to build something or you will waste even more time. If you want, you can have stateful maps in that non-changeable phase of your app. It's a completely valid approach and it has been used in mutable languages for years. And it works, because we're in this Slack community that has a Java back-end.
The parts that remind me of Spring are the system config map and the component/using pieces, and having to wrap things in the Lifecycle protocol and a defrecord (though I think you can extend plain maps now and not have records, which is nice).
There's a bit more ceremony, and then it's hooked up in a bit of a "frameworky" way, in that it's held in-memory by the server and injected by a middleware. So it has vibes of "it calls you" and a bit of magic, maybe from the fact most people couldn't clearly answer how it's all hooked up, I'm assuming there's some level of copy/pasting the setup involved.
I'm not dismissing it, it obviously works for a lot of people, and Spring is fine, and Component is obviously a lot more lightweight than Spring.
But delays have worked fine for me on multiple production services for years. So I'm also stuck in my way, though at first glance my way seem to involve less, no dependency on a DI lib, no need to hook up anything, just have your top level API handlers grab what is needed from the global delayed Var and inject them where they are needed.
Again, until I try it seriously, I don't consider having given it a fair chance, so I'm withholding final judgement till then. That's just my first impressions. I'm still trying to think of the benefits over what I'm doing, I think reloaded workflow is one, maybe if you're worried people will accidentally access the global it's better to make it impossible by having it stored in the closure as a guardrail, and maybe it helps if you try to create reusable components across projects.
Anyways, it was curiosity. Lots of people think they need a DI lib, some people here mentioned that it's not necessarily needed, I agree, I don't use one and seem to handle similar concerns without one just fine. I think it's good to let others know, there are ways to do without, maybe you don't need a DI lib. Similarly, I was curious if I'm missing out on anything by not having one.