This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-11-08
Channels
- # announcements (42)
- # aws (2)
- # babashka (69)
- # beginners (38)
- # calva (18)
- # cider (39)
- # circleci (1)
- # clj-commons (10)
- # cljs-dev (2)
- # clojure (36)
- # clojure-australia (14)
- # clojure-europe (25)
- # clojure-gamedev (40)
- # clojure-losangeles (4)
- # clojure-nl (5)
- # clojure-sweden (1)
- # clojure-uk (5)
- # clojurescript (133)
- # core-logic (24)
- # cursive (7)
- # datalevin (4)
- # datascript (3)
- # figwheel-main (1)
- # fulcro (45)
- # honeysql (1)
- # integrant (43)
- # introduce-yourself (1)
- # jobs (4)
- # leiningen (3)
- # lsp (32)
- # nextjournal (9)
- # pathom (18)
- # polylith (21)
- # portal (65)
- # re-frame (6)
- # releases (1)
- # remote-jobs (1)
- # reveal (12)
- # rewrite-clj (1)
- # sci (84)
- # tools-deps (22)
I'm contributing to a Poly migration (or an "optimistic assessment", i.e. just do it and maybe adjust things later) and am a bit concerned about the functional interfaces thing.
I'm not seeking to question them (just as I'm not looking to be convinced to adopt them). Hopefully my questions are fairly quick ones:
• can I successfully use Polylith with vanilla protocols and the Component library? Is it just a matter of placing defprotocol in the right place and it will just work? Do you know of existing teams going this route?
• Specifically, I would guess that the basic functionality (build prod artifacts + build dev repls) would just work. Of the other tooling, are most things coupled to an assumption of "functional interfaces" or not so much?
◦ (In the end using my own test runner instead of poly's own is easy enough. No big deal, I'm just curious)
I can answer some 🙂, @U1G0HH87L and @U04V70XH6 can elaborate more:
1. I don’t see any reason that it shouldn’t work. Our team does not use it that way though.
2. Most of the tooling that works with tools.deps should work with Polylith as well. Polylith uses a regular deps.edn to define sources and dependencies and to create a classpath. Polylith's test runner gives you incremental tests, but you don't have to use it. We have plans to make the test runner configurable in the future so that you can plug in your own.
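For context, here is a hedged sketch of what that "regular deps.edn" can look like at the root of a Polylith workspace, where bricks are pulled in as :local/root dependencies. All brick names here are hypothetical, not from this thread:

```clojure
;; workspace-root deps.edn (sketch -- adjust to your Polylith version)
{:aliases
 {:dev  {:extra-deps {org.clojure/clojure {:mvn/version "1.11.1"}
                      ;; each brick is just a local dep on its directory
                      poly/user           {:local/root "components/user"}
                      poly/rest-api       {:local/root "bases/rest-api"}}}
  :test {:extra-paths ["components/user/test"
                       "bases/rest-api/test"]}}}
```

Because it is plain tools.deps configuration, anything that understands deps.edn (editors, CI, other CLI tooling) sees an ordinary Clojure project.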
Thanks! Sounding good :)
poly check is another thing that might be sensitive to the pattern that one is using, wdyt?
@U45T93RA6 We use Component heavily at work and it is a good fit with Polylith. Not sure what protocols you're referring to in that context though. Where we have our own protocols, we put them in <top-ns>.<brick>.interface.protocols and require them elsewhere as needed. There's no conflict with Polylith's expectations here -- but we do strictly follow the interface convention (with one or more implementation namespaces).
The Component implementations (records) are generally in the impl namespaces, and we usually have a specific constructor function declared in the interface namespace rather than exposing the record anywhere. The records we use are pretty much all implementation details.
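A minimal sketch of the layout described above, under stated assumptions: the top namespace acme, the brick name emailer, and all function/record names are hypothetical. The protocol lives in interface.protocols, the record in impl, and the interface namespace exposes only a constructor:

```clojure
;; components/emailer/src/acme/emailer/interface/protocols.clj
(ns acme.emailer.interface.protocols)

(defprotocol Emailer
  (send-email [this to subject body]))

;; components/emailer/src/acme/emailer/impl.clj
(ns acme.emailer.impl
  (:require [acme.emailer.interface.protocols :as protocols]))

(defrecord SmtpEmailer [config]
  protocols/Emailer
  (send-email [_ to subject body]
    ;; real SMTP delivery elided in this sketch
    {:sent true :to to}))

;; components/emailer/src/acme/emailer/interface.clj
(ns acme.emailer.interface
  (:require [acme.emailer.impl :as impl]))

(defn create
  "Constructor function -- callers never see the record type."
  [config]
  (impl/->SmtpEmailer config))
```

Callers require acme.emailer.interface (and acme.emailer.interface.protocols if they need the protocol itself), so the record stays an implementation detail.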
> <top-ns>.<brick>.interface.protocols
Interesting, thanks! Might go for this. I think that people (certainly not only me), by default, when encountering the concept of "functional interfaces" in the doc/FAQ, might be tempted to believe they entirely replace the notion of a protocol.
Good to see that it's not a black-or-white choice.
It seems to be an FAQ "Why not use protocols?" but they're not equivalent and you can use protocols just fine with Polylith, but that's not what multi-implementation bricks are about. It's hard to find good words to describe it in a way that isn't confusing.
We have one multi-implementation brick at work -- that doesn't involve protocols at all -- and we have several bricks that have associated protocols that have multiple implementations but not in the "swappable" sense of Polylith's bricks.
We have a ws.http-client.interface in two bricks -- http-client-httpkit and http-client-hato -- and all our HTTP code is written against that functional interface. Then the :dev and other projects specify which implementation they want at "build" time (classpath build time for :dev, actual JAR build time for the other projects). And it's great that poly test figures out the classpath to use based on profiles (for :dev) or on the actual :deps for other projects, and the classloader isolation makes that all possible in a single JVM process.
poly test :dev vs poly test :dev +httpkit runs the :dev project tests with Hato or httpkit respectively (the :+default alias / profile selects the Hato client, the :+httpkit alias / profile selects the httpkit client).
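As a hedged sketch (the actual workspace may differ), such "+" profiles are just deps.edn aliases that swap which implementation brick lands on the classpath:

```clojure
;; workspace-root deps.edn fragment -- profile aliases start with "+"
{:aliases
 {:+default {:extra-deps {poly/http-client-hato
                          {:local/root "components/http-client-hato"}}}
  :+httpkit {:extra-deps {poly/http-client-httpkit
                          {:local/root "components/http-client-httpkit"}}}}}
```

With this shape, poly test picks :+default unless another profile (e.g. +httpkit) is named on the command line.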
Sounding good. Selecting impls wouldn't be my bread and butter but I can see how it can be occasionally useful.
I just started using polylith and I think I had the same assumptions as @U45T93RA6, i.e. I was expecting some namespacing conventions for interface files. Like say you declare some init/start/stop methods, and polylith would wire your component into a system for you in a project or base.
Unless I’ve missed it, it doesn’t look like there’s anything like that in the project at the moment?
But I’ve seen some libraries do similar things and I’m not convinced it’s such a bad idea… https://github.com/furkan3ayraktar/polylith-clj-deps-ring
@U0CJ8PTE1 I guess "component" is quite an overloaded word... Polylith is an architecture, not a framework tho'...
It's not the same thing in the library example above. The component there actually starts/stops an HTTP server; Polylith itself is not instantiating the component. Polylith components are just contracts with function definitions. You can use any kind of state management library on top of them.
Performing a bit of necromancy on this thread 🪄
• Sean, does that mean that interfaces which define protocols as a common language both have to define that protocol? I still don't see how I can have one protocol and two different implementations with different dependencies as different components, unless I extract that protocol to another component. A trivial example would be very much appreciated.
• On the same note, where do you manage the Component (framework, not poly) system map construction? In a base? Bases are meant to expose a single API, but if my code has more than one API, would that entail creating a base which assembles them?
• Vemv, any reports or experiences you can share 3 months after this question? I feel a certain lack here, in that even if I define the same interface for two components, I have no guarantee or means to enforce that these interfaces' implementations won't diverge, unless Polylith enforces it and I missed that.
@UK0810AQ2
• If multiple components are intended to provide implementations of a protocol then, yes, it needs to be in its own component, along with any default or common implementations (such as nil/Object etc.). Components are "cheap" so having more of them isn't a problem. We're up to 39 so far and we've only migrated about a third of our codebase.
• For Component, it depends. Mostly the assembly of Components into a system map is a "base" concern, yes, although we have a core Component tree that is reused by nearly all of our apps, which aggregates commonalities like caching, environment/tier handling, host services (hostname, cpu/heap data), email handling, datasources, etc. Those have nearly all been migrated into their own (Polylith) components, although the "application" Component, which is the common core tree, has not yet been migrated (it's on our roadmap).
• Polylith does indeed check that the swappable components' interface is the same -- I don't know how deeply it checks; @U1G0HH87L would probably have to answer that?
Everything in and after top-namespace.component-interface.interface is checked, e.g. top-namespace.component-interface.interface.stuff.more.stuff, which means that every component that implements top-namespace.component-interface has to implement all the functions that live in the interface namespace and its sub-namespaces.
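Concretely, using the http-client example from earlier in the thread and a hypothetical top namespace acme, that check implies both implementing components must mirror every interface namespace (and every function in them):

```
components/
  http-client-hato/src/acme/http_client/interface.clj
  http-client-hato/src/acme/http_client/interface/stuff.clj
  http-client-httpkit/src/acme/http_client/interface.clj
  http-client-httpkit/src/acme/http_client/interface/stuff.clj
```

If one component adds a function under interface (or a new interface sub-namespace) that the other lacks, the workspace check reports the mismatch.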