
What are your thoughts on vertical slice architecture versus clean architecture? My feeling is that vertical slice is about avoiding n-tier architecture, while hexagonal/ports-and-adapters/clean etc. is about isolating the domain from external dependencies and external code? What are your thoughts?


After a conversation with a friend yesterday, I have a whole talk on this cooking. I hadn't heard the name "vertical slice," but more or less, yes, you should be thinking of individual, actual code paths over arbitrary, horizontal slices.


I don't love the name "vertical" slice, because it's not about imaginary slicing. The reality is that you have a code path that is going to execute. You'll get the best outcomes when those code paths are isolated and flat (i.e. not deeply nested via stack frames or callbacks).


But then it doesn’t sound like they are mutually exclusive.


You can both focus on the "vertical slice" and have a bucket of shared functions that effectively constitute a "horizontal slice."
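A sketch of that idea (all names here are made up for illustration): two flat "vertical" slices, each a single code path, drawing on one small shared "horizontal" bucket of helpers.

```python
import json

# --- shared "horizontal" bucket: helpers both slices happen to use ---
def render_json(data: dict) -> str:
    return json.dumps(data)

# --- slice 1: get_order, one flat code path from request to response ---
def get_order(order_id: int, db: dict) -> str:
    order = db["orders"][order_id]          # direct, flat lookup; no layers
    return render_json({"id": order_id, "total": order["total"]})

# --- slice 2: get_order_details, its own flat path, its own shape ---
def get_order_details(order_id: int, db: dict) -> str:
    order = db["orders"][order_id]
    return render_json({"id": order_id, "items": order["items"]})

db = {"orders": {1: {"total": 9.99, "items": ["book"]}}}
print(get_order(1, db))          # {"id": 1, "total": 9.99}
print(get_order_details(1, db))  # {"id": 1, "items": ["book"]}
```

The slices stay flat and independent; only the deliberately small helper bucket is horizontal.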

Samuel Ludwig 16:08:19

The term used to describe this kind of "vertical slice" architecture in The Pragmatic Programmer is "Tracer Bullets". It's mainly described (at least how I read it) as a way of developing some kind of scaffolding that you can latch onto as development on other features continues. It forces you to make decisions like "what's the name of this database / where is it", "what's the endpoint for my API", and stuff like that.


I think another good take on this is micro frontends; the rest of the 'slice' then takes care of itself. But I've also fantasised many times about the concept of append-only codebases. What would happen if apps at the highest level were open to extension but closed to modification? I've tried a few times to structure projects at the top level around 'concerns', which are just your cross-cutting components like data access etc., and 'features', which use existing abstractions and hooks to add new functionality into the app. What's nice is it guarantees that each piece of work is basically self-documenting, because it comes with all the changes and functional tests alongside. You're also never scratching around in git logs asking why something exists, because everything is explicitly bound to a feature. Sadly, while this stuff is attractive to engineers, designers tend to extensively interleave functionality in front ends, which makes it really hard to keep pace with abstractions to hook into independently.


If I understand the pattern, assuming an API-driven service for example, it seems to say that each API is an independent code base. Maybe it lives in the same repository, but there should be no shared dependencies between the API code paths, for example. And this might extend all the way to the DB. Maybe that's an extreme take; after all, if two APIs modify a User entity, they probably need to affect the same DB object. So it's probably more that you try to minimize that kind of shared resource as much as possible.


So getOrder and getOrderDetails would have completely different implementations, DBs, etc. from head to toe. No shared DB, no shared DB connection, no shared model, nothing. You could almost have two different devs implement each in parallel without ever talking to each other or using each other's code.


Do I get that right?


@U0K064KQV I have not talked enough with people who are deeply familiar with vertical slice culture. My understanding of clean architecture comes through an FP mindset: the pure/impure split, taken into an approximation of what that could look like in, say, Java.

apbleonard 23:08:42 This talk by the wonderful Scott Wlaschin makes a very compelling case for vertical slicing, given its simplicity, testability and deployment benefits - though he gives it a better name: "transaction scripts - reinvented". He also makes the bold claim that "FP style transaction scripts are the natural evolution of Clean/Onion/Hexagonal Architecture."
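A minimal sketch of what an FP-style transaction script could look like (this is one reading of the idea, not Wlaschin's actual code; `place_order` and its collaborators are invented): one flat function per use case, pure decision logic in the middle, IO pushed to the edges.

```python
from dataclasses import dataclass

@dataclass
class Order:
    user_id: int
    amount: float

# pure core: every input is passed in, the decision comes back as data
def decide_order(order: Order, balance: float) -> dict:
    if balance >= order.amount:
        return {"status": "accepted", "charge": order.amount}
    return {"status": "rejected", "reason": "insufficient funds"}

# imperative shell: does the IO (via injected functions), then calls the core
def place_order(order: Order, load_balance, save_charge) -> dict:
    decision = decide_order(order, load_balance(order.user_id))
    if decision["status"] == "accepted":
        save_charge(order.user_id, decision["charge"])
    return decision

# stand-in IO for demonstration: a fixed balance, a no-op writer
result = place_order(Order(1, 5.0), lambda _uid: 10.0, lambda *_: None)
print(result)  # {'status': 'accepted', 'charge': 5.0}
```

The pure `decide_order` is testable with plain data, no mocks, which is most of what the "transaction scripts" pitch is about.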


@U3TSNPRT9 So you get a convergence of the two. I agree that there's no need to set clean/onion/hexagonal in opposition to transaction scripts. There is also the claim that Clean/Onion/Hexagonal is the same as Functional Architecture.


What is rarely discussed when comparing FP's "functional core, imperative shell" to clean architecture is how to handle cases where IO depends on logic branches. Many dismiss this as a code smell. If it's needed, your beautiful functional tests may end up needing mocks - i.e. checking that an (impure) function gets called with the right args, etc. - which isn't so beautiful anymore. A way round this is "dependency interpretation": your logic returns an interim "result" that expresses, as data, that it is "missing information", and an outer layer can look that data up and recurse (i.e. hand it back to the logic) - or just stop there with an exception and hope more information is provided the next time the API is called. This is similar to the approach here: This makes the tests very functional (no need for mocks; you could use property-based testing), but it does mean the overall flow can be harder to reason about (it could loop forever given bugs, etc.)


Slices and layers are not mutually exclusive. Ultimately, both layers and slices will emerge if you keep to a simple principle: maximize cohesion within modules and minimize coupling between modules. Organize code by dependencies, not by names. For example, "only controllers are able to talk to the model" is an artificial constraint forced on you by naming, not by code dependency. If you minimize coupling and maximize cohesion, both layers and slices will emerge in the dependency graph. Not following this principle results in a spaghetti dependency graph.

A trap that leads to a spaghetti dependency graph is thinking you understand what the dependency graph looks like before writing any code at all, by following MVC or whatever pattern is in fashion. Instead, keep entropy low by lumping everything together in one module at first, and increase entropy by introducing another module only when that module gets too large. This minimizes entropy within each module but, ironically, increases entropy in the overall system - which is exactly what living biological systems do. This is fine and expected, because this is how structures form. The increase in entropy is beneficial as long as the coupling between the modules is minimized; increased entropy in the overall system is the cost of minimizing entropy within modules. I call this the thermodynamic technique of system design 😁 The earth radiates as much energy as it absorbs from the sun but increases the entropy of the universe by 20 times (number made up), and in doing so it creates pockets of low entropy called life. The universe started out at the big bang in a state of low entropy and will end in heat death at maximum entropy. In between those two states is where interesting structures, like galaxies and humans, form.

Incrementally adding another module to increase entropy in the system as needed lets another principle of physics come into play: the principle of locality, which says changes that happen locally will not have a direct and immediate impact on something far away - no "spooky action at a distance". Entanglement violates the principle of locality, and global symbols are its software analogue: changes to a global have an immediate effect everywhere that depends on it. The principle of locality allows things inside a module to evolve with little or no dependency on things outside the module. So what is a module? It can be implemented as a namespace, as a closure, or as a data structure.
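The "module as a closure" option from the last point could look like this (a made-up counter, just to show the locality claim: the state is reachable only through the functions the module exports).

```python
def make_counter():
    count = 0                    # private state, invisible outside the closure
    def increment():
        nonlocal count
        count += 1
        return count
    def value():
        return count
    # the module's "exports": the only way in is through these functions
    return {"increment": increment, "value": value}

counter = make_counter()
counter["increment"]()
counter["increment"]()
print(counter["value"]())  # 2
```

Nothing outside `make_counter` can touch `count`, so local changes to how the count is stored can never have "spooky action at a distance" elsewhere.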


From what I understood, any shared modules would be considered layered architecture. In order to have slices, there'd be no dependencies between slices. I guess this might be impossible to achieve, so maybe it's more about tending towards minimizing shared dependencies. For example, all slices that talk to the DB would need a DB connection and depend on next.jdbc.
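A sketch of that "minimal shared dependency" shape (with `sqlite3` standing in for next.jdbc, and all names invented): each slice owns its own query and its own result shape; the connection is the only thing they share.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 9.99)")

# slice A: its own SQL, its own result shape
def get_order(conn, order_id):
    row = conn.execute(
        "SELECT total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    return {"id": order_id, "total": row[0]}

# slice B: its own SQL, its own result shape, no code shared with slice A
def order_exists(conn, order_id):
    row = conn.execute(
        "SELECT 1 FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    return row is not None

print(get_order(conn, 1))      # {'id': 1, 'total': 9.99}
print(order_exists(conn, 2))   # False
```

Each slice could change its query, mapping, or even its table without touching the other; only the connection (the next.jdbc-level dependency) is common.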