This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
When doing a functional decomposition of a system, it's worth visualizing the architecture. UML class diagrams can be used for data structures! But I'm curious: how do you draw the functions and interactions (calls)? Flowcharts? Blocks and arrows with params over the arrows?
If we're not talking about specific implementations, but ideas about visualizing in general, I think Eric Normand's book "Grokking Simplicity" has some good insights to glean. (I hope Eric Normand and Manning do not mind if I post 2 screenshots in a public forum).
It's always important to ask what question you are trying to answer via visualization. 1. How do these functions/components depend on or interact with each other? 2. Are there ways we can group sets of functions/components by commonality?
One way to answer these two questions is with a DAG visualization which identifies who is calling/depending on what:
The nodes can be considered functions/components; the rows are some way of grouping them together (e.g. namespaces/subsystems); and the arrows are explicit dependencies. This way you can visualize: "are we building a system that goes from abstract to concrete?" (arrows should not be going up), "are we missing layers of abstraction?" (e.g. an arrow reaching down past many layers), "is this depending on too many things?" (too many arrows), "circular dependencies?", etc.
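These layer checks are mechanical enough to sketch in code. Below is a minimal, language-neutral illustration (the component names, layer assignments, and helper functions are all invented for the example, not taken from any real system): given each node's row in the diagram, upward arrows and long downward reaches fall out of two simple filters.

```python
# Hypothetical dependency data: each function/component is assigned a layer
# (its row in the diagram; 0 = most abstract, higher = more concrete), and
# each edge points from caller to callee. All names are invented.
layers = {
    "handler":   0,
    "service":   1,
    "storage":   2,
    "sql-utils": 3,
}
calls = [
    ("handler", "service"),
    ("service", "storage"),
    ("storage", "sql-utils"),
    ("service", "handler"),    # an upward arrow: concrete depending on abstract
    ("handler", "sql-utils"),  # reaches down past two layers at once
]

def upward_arrows(layers, calls):
    """Arrows that go up the diagram: a more concrete layer
    depending on a more abstract one."""
    return [(a, b) for a, b in calls if layers[a] > layers[b]]

def long_reaches(layers, calls, max_span=1):
    """Arrows that skip more than `max_span` layers downward,
    hinting at a missing layer of abstraction."""
    return [(a, b) for a, b in calls if layers[b] - layers[a] > max_span]

print(upward_arrows(layers, calls))  # [('service', 'handler')]
print(long_reaches(layers, calls))   # [('handler', 'sql-utils')]
```

The same filters work on any call graph you can extract, regardless of language; the visualization then just draws what the filters flag.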
Another visualization considers the system not at rest, but in motion (things calling other things: concurrency and time are possible culprits).
Here we are visualizing that there may be things happening concurrently and we need to consider all permutations of when things happen (and we also visualize points of forking and merging where we can simplify our reasoning and consider as if things were single-threaded; perhaps because results are independent or we can rely on specific concurrency primitives)
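The "all permutations of when things happen" point can be made concrete with a small counting sketch (the strand names are invented for illustration): two independent strands of m and n steps admit C(m+n, m) distinct schedules, which is why fork/merge points that restore single-threaded reasoning are worth marking on the diagram.

```python
from math import comb

def interleavings(xs, ys):
    """All schedules of two concurrent strands that preserve
    each strand's internal step order."""
    if not xs:
        return [ys]
    if not ys:
        return [xs]
    return ([[xs[0]] + rest for rest in interleavings(xs[1:], ys)] +
            [[ys[0]] + rest for rest in interleavings(xs, ys[1:])])

a = ["a1", "a2"]          # steps of one concurrent strand (invented)
b = ["b1", "b2"]          # steps of another
scheds = interleavings(a, b)

# Even 2+2 steps give C(4, 2) = 6 schedules to reason about;
# the count explodes as the strands grow.
print(len(scheds), comb(len(a) + len(b), len(a)))  # 6 6
```

Marking a merge point in the diagram is effectively asserting that only one of these schedules needs to be considered from that point on.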
Not sure if that's what you were asking for @U0514TE0F - but perhaps it will help spark some solutions to your problem. And if this seems like a good direction, I recommend checking out Eric Normand's book "Grokking Simplicity" - he goes into a lot more detail on this subject :)
Thank you! I'm tinkering with this. The idea is to visualize architecture based on functions/procedures, not classes/objects. It's definitely gonna be blocks and arrows - the question is how? I will share my version.
I wonder if something like polylith can help this visualisation. IMO seeing all functions can be a lot. Visualising polylith components via interfaces and their calls might be a good enough middle ground. This could be some good tooling for polylith https://polylith.gitbook.io/polylith/introduction/polylith-in-a-nutshell .
The poly tool has some basic CLI visualisation based on deps. Perhaps one based on namespaces and interface calls might be worth investigating.
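A rough sketch of what such tooling could do (this is not how the poly tool works, and the namespaces below are invented): scan Clojure sources for `ns` declarations and `:require` forms, then emit a Graphviz DOT graph of namespace dependencies. A real version would read files and use a proper Clojure reader rather than regexes.

```python
import re

# Invented example sources; in practice these would be read from disk.
sources = [
    "(ns app.api (:require [app.user.interface :as user]))",
    "(ns app.user.interface (:require [app.user.core :as core]))",
    "(ns app.user.core)",
]

def edges(sources):
    """Crude extraction of (namespace, required-namespace) pairs.
    Regex-based, so it only handles the simple ns forms shown above."""
    deps = []
    for src in sources:
        ns = re.search(r"\(ns\s+([\w.\-]+)", src).group(1)
        for req in re.findall(r"\[([\w.\-]+)", src):
            deps.append((ns, req))
    return deps

def to_dot(deps):
    """Render the dependency edges as a Graphviz DOT digraph."""
    lines = [f'  "{a}" -> "{b}";' for a, b in deps]
    return "digraph deps {\n" + "\n".join(lines) + "\n}"

print(to_dot(edges(sources)))
```

Piping the output through `dot -Tsvg` would give the blocks-and-arrows picture discussed above, restricted to interface-level calls if you filter the edges accordingly.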
Good morning. When I think about dependencies in Polylith, it helps to compare the bricks (components and bases) with libraries, but as if they lived in one single shared mono-repo without the need for versioning. I think the main reason we sometimes create diagrams out of library dependencies is that a library dependency graph often contains different versions of the same library, and we sometimes want to sort out why the resolved version creates problems for us. If all the code/libraries were in sync (as in a Polylith workspace) we would not have that problem.

A Polylith workspace is a place where we are in charge of how things change over time, which allows us to skip versioning and instead work against the latest commit, while still feeling some confidence that everything works together: that all our contracts (component interfaces) are fulfilled, that we don’t have any circular dependencies, and that the tests are green. When all this is assured, a graph adds less value. What’s important instead is which libraries and bricks we use. This is why the poly tool just lists the building blocks (components, bases, and libraries) in alphabetical order instead of showing them as a diagram (see https://polylith.gitbook.io/polylith/conclusion/production-systems).

Just to be clear: I’m not advocating putting all the libraries in the world in a single mono-repo to get rid of versioning. That would be almost impossible, not even desirable, and the tests would take forever to run!