
Does anybody have experience with using DDD-style aggregates in a production system? I would be extremely interested in how it worked out for you and whether you would recommend it. It feels like a lot of people are either doing SQL more or less directly from the handlers or using a kind of repository pattern with approximately one repository per table. I might be mistaken on this though. I fully understand the pragmatic reasons for the former, and the tradeoffs of that approach are IMO less severe in Clojure. It's how we do it as well, and it has worked well for us so far. I guess DDD aggregates are close to repositories, but you'd have only very few of them and the interfaces would also be smaller. You basically only have load and save operations, and you don't update individual fields via the repository, right? You make changes on the map and pass it to the save function. I wonder whether it can be done, and whether it can work well. I see potential performance issues because this approach involves a lot of overfetching, but are there other problems that are not obvious?
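For what it's worth, the load/save-only shape described above could look something like this — just a sketch, with all names (`OrderRepo`, `:order/lines`, etc.) made up for illustration:

```clojure
;; Hypothetical sketch of an aggregate-style repository: only load/save,
;; no per-field update functions. The caller edits the map and saves it whole.
(defprotocol OrderRepo
  (load-order [repo order-id]
    "Fetch the full order aggregate (order + its lines) as one map.")
  (save-order! [repo order]
    "Persist the whole aggregate, replacing what was there."))

;; Pure domain logic works on the plain map; the repo only loads and saves.
(defn add-line [order line]
  (update order :order/lines conj line))

(comment
  ;; usage, given some implementation bound to `repo`:
  (let [order (load-order repo 42)]
    (save-order! repo (add-line order {:sku "A-1" :qty 2}))))
```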


I’m moving towards DDD aggregates for the parts of my applications that aren’t CRUD-y and where I want to maintain invariants that span multiple entities or value objects. It’s true there’s some overfetching/-writing, but simplicity of code and concepts is more important. I’m not a seasoned DDD practitioner though, and sometimes these terms aren’t well defined or understood the same way by different people, so we might not be talking about the same thing. What I have is a .repo ns for my aggregate or entity, which is what handlers and domain logic use. This ns is the interface, and in it there is a focused set of functions in the domain language (ubiquitous language in DDD-speak). The functions are often spec’ed. This is a boundary, after all, between the domain layer and the persistence layer. The functions call implementations in the db layer in a different namespace.
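A minimal, self-contained sketch of that layering — assuming a hypothetical ad aggregate, with the db layer stubbed by an atom so the sketch runs on its own (in a real system those calls would delegate to the separate db namespace):

```clojure
;; Hypothetical sketch of a .repo namespace as the boundary: functions in the
;; domain language, spec'ed, delegating to the persistence layer.
(require '[clojure.spec.alpha :as s])

(s/def :ad/id int?)
(s/def :ad/title string?)
(s/def :ad/status #{:draft :published})
(s/def ::ad (s/keys :req [:ad/id :ad/title :ad/status]))

(def ^:private db (atom {}))  ; stand-in for the real db namespace

(defn find-ad
  "Load the full ad aggregate by id, or nil."
  [id]
  (get @db id))

(defn save-ad!
  "Persist the whole aggregate; the spec is checked at the boundary."
  [ad]
  (when-not (s/valid? ::ad ad)
    (throw (ex-info "Invalid ad" (s/explain-data ::ad ad))))
  (swap! db assoc (:ad/id ad) ad)
  ad)
```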


Thank you for sharing! I like that you're mentioning invariants, because this is an important benefit of centralizing data access which is difficult to get right when executing statements directly in handlers, especially if you want to avoid coupling. Do you atomically save the complete entity, or do your repos provide functions for smaller changes too? I assume the functions that store an aggregate can become quite complex if you want to avoid updating everything whether or not it changed. Overfetching sounds like less of an issue than overwriting, because the latter can more easily affect the underlying database negatively. It always sounded to me like this approach could lead to simpler systems. It's nice to hear that it's working for you 🙂 Why do you make an exception for the CRUD-y parts, if I may ask? I'm also not sure if I know what you mean — can you share an example of something you wouldn't model as an aggregate?


I tend to think about my entities as state machines. Not that I actually use a state machine library or anything like that, but conceptually entities have a life cycle, and there are different operations available to entities depending on their state in the life cycle. So the repo functions typically implement the operations that move the entities from one state to the next, and save the data that is relevant in this state. So I guess that’s the “functions to do smaller changes to them” that you ask about. Invariants are enforced by a combination of domain logic, specs and database constraints. With the state machine-y entities in mind, I can provide CRUD interfaces to entities in a particular state. For example, an unpublished ad can be created/updated/deleted without much ceremony and domain logic. It’s when publishing that data must be validated, invariants enforced, domain logic executed et cetera. Does that make sense?
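That life-cycle idea can be sketched as plain data plus one transition function — states and names here are made up for illustration, not taken from the post above:

```clojure
;; Hypothetical sketch of "entities as state machines": legal transitions
;; as data, with a repo function per transition in a real system.
(def transitions
  {:draft     #{:published :deleted}
   :published #{:archived}
   :archived  #{}})

(defn transition
  "Return the ad in its new status, or throw if the move is not allowed.
  Validation/invariant checks would hook in here before the state change."
  [ad to]
  (let [from (:ad/status ad)]
    (if (contains? (get transitions from #{}) to)
      (assoc ad :ad/status to)
      (throw (ex-info "Illegal transition" {:from from :to to})))))

(comment
  (transition {:ad/status :draft :ad/title "Bike"} :published)
  ;; => {:ad/status :published, :ad/title "Bike"}
  )
```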


Yes, that does make sense. I worked on a project in the past where we had a similar mental model, but also without making the state machine explicit. We meant to add it, but it kept getting postponed.


There’s some experimental datafy/nav stuff in next.jdbc which enables lazy loading while maintaining the feel of pure data. That or something similar could help with the overfetching. In terms of writing too much, there’s nothing stopping you having some sort of session/unit-of-work mechanism (perhaps with watchers and validators over an atom or something) to track only the minimal update required.
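A tiny sketch of that atom-with-watchers idea — tracking which top-level keys were touched so a save could write only those. All names are hypothetical; this is one possible shape, not an established library:

```clojure
;; Hypothetical unit-of-work sketch: a watcher on the entity atom records
;; which top-level keys changed, so save! could update only those columns.
(defn tracked-entity [initial]
  (let [dirty  (atom #{})
        entity (atom initial)]
    (add-watch entity ::dirty
               (fn [_ _ old new]
                 (swap! dirty into
                        (for [k (into (set (keys old)) (keys new))
                              :when (not= (get old k) (get new k))]
                          k))))
    {:entity entity :dirty dirty}))

(comment
  (let [{:keys [entity dirty]} (tracked-entity {:title "Bike" :price 10})]
    (swap! entity assoc :price 12)
    @dirty)
  ;; => #{:price}
  )
```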


Every time I read about DDD aggregates, I love the idea in terms of app-level simplicity, but the thought of potentially writing out an aggregate with several 1-to-many relationships was always quite off-putting, especially if all you needed to do was update a single field or two on the top-level entity. This especially gets hairy when you take into account concurrency in your relational DB. Most people don't run at a "serializable" isolation level (with the needed retry logic), or even bother to take a row-level lock on the top-level entity. Which is to say, people open themselves up to consistency anomalies if they implement this carelessly.


Surely forcing every operation to synchronise (through optimistic concurrency or otherwise) on a single top-level aggregate row during each transaction is simpler and less error-prone than the alternative? Multiple clients wanting to fiddle with the order lines of an order and then trigger some discount/shipping logic seems much safer when clients can only address the aggregate order. Obviously in both cases you can just forget to do that and mess things up, but it seems like less cognitive overhead when everything is part of an explicit whole.
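A hedged sketch of that synchronisation: a version column on the aggregate's root row, bumped on every save, using next.jdbc. The table and column names are made up; the only next.jdbc specifics assumed are `execute-one!` and its `:next.jdbc/update-count` result key:

```clojure
;; Hypothetical optimistic-concurrency sketch on the aggregate root.
(require '[next.jdbc :as jdbc])

(defn root-update-stmt
  "SQL vector that only succeeds if nobody saved the root in between."
  [{:order/keys [id total version]}]
  ["UPDATE orders SET total = ?, version = version + 1
    WHERE id = ? AND version = ?"
   total id version])

(defn save-order!
  "Returns true on success; false means a concurrent writer won, so the
  caller should reload the aggregate and retry."
  [ds order]
  (= 1 (:next.jdbc/update-count
        (jdbc/execute-one! ds (root-update-stmt order)))))
```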


I can't argue with the less-cognitive-overhead aspect, and perhaps my fears about re-writing lots of FK relations are overblown in the grand scheme of things 🤷


• Save the time (and space) for the computer?
• Save the time (and space) for the employer?
• Save the time for the employee?
• Save the time for the consumer?

There are some trade-offs

There are ways to get there
If you care enough for the living
Make a little space
Make a better place
Heal the world
Make it a better place … …  
❤️ clojure-spin ❤️ 🙏 😄😄😄


> There’s some experimental datafy/nav stuff in next.jdbc which enables lazy loading

I never looked into those, I'll have to read up on this and the implementations. This sounds really convenient and would make an implementation of this a lot easier and nicer to use. But it does make the core impure on the other hand, and it'd be nice to never have to worry about integration point issues in that part of the system. That's a good point about overwriting — that should actually be quite easy to solve in Clojure. Maybe diffing would work too if you want to stay as pure as possible. I haven't thought about a possible implementation so much.

> but the thought of potentially writing out an aggregate with several 1-to-many relationships was always quite off-putting

Yeah true, I can see issues there if stuff is shared amongst root entities. I guess that depends a little on the domain. I would have assumed that this style of partitioning the database should prevent some concurrency issues, but I didn't consider that larger units can also lead to larger overlap. But we need to consider the required isolation in handlers anyway, even without aggregates, so maybe it's at least no worse in the end. I think thom's suggestion about synchronizing on the root aggregate should cover most cases.

> There are some trade-offs

Heal the world metal
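The pure diffing idea mentioned above can be sketched with `clojure.data/diff`, computing which top-level keys actually changed since load (function name hypothetical):

```clojure
;; Hypothetical sketch: stay pure by diffing the loaded snapshot against the
;; edited map, then write only the keys that actually changed.
(require '[clojure.data :as data])

(defn changed-keys
  "Top-level keys whose values differ between the loaded and current map."
  [loaded current]
  (let [[only-old only-new _] (data/diff loaded current)]
    (-> #{}
        (into (keys only-old))
        (into (keys only-new)))))

(comment
  (changed-keys {:title "Bike" :price 10 :lines []}
                {:title "Bike" :price 12 :lines []})
  ;; => #{:price}
  )
```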


Yes, quite some, though not with Clojure. Axon Framework, together with Axon Server, helps to build aggregates from events, and any command for the same aggregate will get routed to the same application. Writing events also uses optimistic locking. You can also use it without Axon Server and use a relational database instead. It is quite annotation-heavy though; to make it easier to use with Clojure, it would be nice if there were some functional glue code.