@ramblurr what is it about your system that makes it hard to test? It sounds like all of your side-effecting functions are pure-ish in that they take the instrument by which they perform their effects as an argument, so what prevents you from replacing the impure parts of the system map with test mocks? Or are tests that use mocks not sufficiently representative for your use-case?
For a typical app, and db queries specifically, I find it relatively easy to set up an in-memory version of the database and test against it - it's much more useful than trying to mock this stuff. Of course it helps if domain logic is separated into pure functions so you can test them separately. I think it's foolish to create these “repository” protocols - huge overhead and no net gain. Protocols are better for stable things with a small number of operations.
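A minimal sketch of that idea, assuming (as in this thread) that side-effecting functions take their "instrument" as an argument; all names here are hypothetical:

```clojure
;; The side-effecting fn takes the db as an argument, so tests can swap
;; the real datasource for an in-memory stand-in with the same shape.
(defn save-order! [db order]
  ;; in production `db` would wrap a JDBC datasource; here it only
  ;; needs to supply an `:insert!` function
  ((:insert! db) :orders order))

(defn in-memory-db []
  (let [store (atom {})]
    {:insert! (fn [table row]
                (swap! store update table (fnil conj []) row))
     :store   store}))

;; in a test:
(let [db (in-memory-db)]
  (save-order! db {:id 1 :total 9.99})
  (= [{:id 1 :total 9.99}] (:orders @(:store db))))  ; => true
```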
Yeah, I read that article and my first thought was: that just doesn't scale up -- it might be great for small stuff but you can't write an app like that when it deals with 200 database tables and when you have lots of multi-table joins for stuff...
+1 to not using the protocol approach on db queries or “repositories”. In typical apps, the database plays a special role, so it makes sense to treat it differently from the other side effects
Why not create a protocol for all the actions in an application which "touch" the database? Those actions will have "domain" names anyway. Putting them in a protocol just adds another degree of flexibility, no?
The issue is that even though creating protocols for interacting with the app state does help separate "what" from "how", it does so in an inflexible way. Adding new entities, properties, actions, etc. each require new methods and implementations. I think this is why it's common to reach for tools like SQL, GraphQL, EQL, pathom, etc. that also separate "what" from "how", but in a much more flexible way.
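To illustrate the contrast being drawn here (a sketch, with hypothetical entity and function names): in the protocol route every new query is a new method plus implementation, while in the data route new "whats" are just new data handed to one generic executor.

```clojure
;; Protocol route: each new entity/action grows the interface.
(defprotocol UserRepo
  (find-user          [repo id])
  (find-user-by-email [repo email]))  ; ...and so on, one method per query

;; Data route: queries are plain maps (honeysql-style); a single
;; hypothetical `fetch` at the edge knows "how", the maps say "what".
(defn find-user-q [id]
  {:select [:*] :from [:users] :where [:= :id id]})

(defn find-user-by-email-q [email]
  {:select [:*] :from [:users] :where [:= :email email]})

;; (fetch db (find-user-q 42))
```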
@ben.sless What's it going to dispatch off? Are you going to create a `defrecord` for every single table as well? Is it going to dispatch off the "database"? A `defrecord` for the database? Or dispatch off `DataSource` or what? That just doesn't make sense to me.
One protocol with hundreds and hundreds of methods, one for each action you need to take against the DB? That's... insane... and not idiomatic.
I think what I'm missing here is some level of abstraction for talking about relational data without tying it to SQL and the particular db implementation. In a way, honeysql is a step in that direction.
What we do is that we have functions that return and modify honeysql queries. So for example you might start with a function that is a glorified `select * from foo`, and then a set of functions that can add more columns (including the correct joins), filter based on permissions, filter based on search/filter parameters, and so on.
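A minimal sketch of that pattern (table, column, and function names are hypothetical): honeysql queries are plain Clojure maps, so each builder is a pure map-to-map function and composition is just threading.

```clojure
(defn base-query []
  ;; a glorified `select * from foo`
  {:select [:*] :from [:foo]})

(defn with-owner [q]
  ;; add a column and the join it requires
  (-> q
      (update :select conj :owner.name)
      (assoc :join [:owner [:= :foo.owner-id :owner.id]])))

(defn filter-by-permissions [q user-id]
  ;; append a permission check to the WHERE clause
  (update q :where (fnil conj [:and]) [:= :foo.owner-id user-id]))

;; compose, then hand the final map to honey.sql/format at the edge
(-> (base-query)
    (with-owner)
    (filter-by-permissions 42))
```

Because the builders never touch the database, they are trivially testable as pure functions; only the final `format`-and-execute step is effectful.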
But for anything that's DB specific we don't test them, we rely on integration tests that actually test against our external API endpoints, with real data in a real database. There's nothing to test for a data-producing function, really.
The author's motivating pain point was that it's hard to compose SQL, e.g. `get-x-from-blah` depends on `get-blah-by-id`. That's because SQL has no data-based API layer. The creator of Onyx has a great talk on this https://youtu.be/kP8wImz-x4w

From this slight mischaracterization of the problem, they launch down a path I think they didn't properly motivate, and so it's easy to debate, because we can talk past each other by each assigning different weights to the attributes in the tradeoffs. They end up talking about decoupling the db connection, but I'm personally unsure if that's a good tradeoff. If it is, then I would have the transaction function close over the db client. I think the Datomic API makes this option very clear: `(d/transact conn {:tx-data ...})` https://docs.datomic.com/cloud/transactions/transaction-processing.html

Software patterns are so often signs that an underlying platform is overly coupled. Imo, Rich architects by pulling things apart, within reason, to give you the tools so you can choose how to put them together. As an aside, I'm irrationally upset Onyx didn't see wider adoption.
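A sketch of the "close over the db client" option mentioned above; `make-save-user!` and `save-user!` are hypothetical names, and the transact function is whatever your client exposes (e.g. Datomic's `d/transact`):

```clojure
;; Build the domain function once at system start-up by closing over
;; the connection; call sites then see only domain data, never the db.
(defn make-save-user!
  [transact! conn]
  (fn [user]
    (transact! conn {:tx-data [user]})))

;; at start-up (with a real client):
;; (def save-user! (make-save-user! d/transact conn))
;;
;; elsewhere in the app:
;; (save-user! {:user/name "Ada"})
```

In tests you can pass a stub `transact!` that records its arguments instead of hitting a database, which is the same decoupling without a repository protocol.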
> SQL has no data-based layer

THIS. It's the "original sin" which doomed us to everything from SQL injection to string mangling