#architecture
2023-11-09
Samuel Ludwig14:11:00

I've been watching some Sussman talks recently, and have previously read a little bit of his Software Design for Flexibility. I've been trying to meditate on some of the ideas regarding system-building as "layering" and building 'organically flexible' systems, but it's all still very loose in my head. Has anyone experimented with building systems this way? I know Sussman has cited Emacs as an inspiration/incarnation of this idea, but I'd be very interested in other experience reports

ehernacki14:11:07

I try to do this instead, especially when working with FP languages: https://www.youtube.com/watch?v=1FPsJ-if2RU

Max16:11:27

@U03HHT2F9S4 do you know of any practical examples of designing a (modern business web) system with that approach? The idea is interesting but of course the devil is in the details. My first thought is that you're just shifting the complexity to the integrations between your services/actors/processes/objects

tengstrand16:11:47

I think he is right when he says that you should be able to delete and replace code easily. In his example he says it should take one or two days to rewrite something, not 12 months. Most services take more than two days to rewrite. But if every service only contains (let's say) 300 lines of code, each of them would be pretty easy to rewrite (hopefully). But if you have 300,000 lines of code, then you need 1,000 services. Now you have a bigger problem! What I'm trying to say is that you need smaller building blocks that are just code, like components in https://polylith.gitbook.io/polylith. So he is almost right; it's just that a huge number of running processes will not make your system simpler.

Samuel Ludwig16:11:07

I've been tossing around the concepts of layering with polylith actually, trying to marry the two in my head to get a better understanding :^)

polylith 1
tengstrand17:11:55

It's pretty impressive that Greg Young can sit on a chair and talk non-stop for almost an hour. I wonder how much time he spent preparing for that talk.

ehernacki17:11:41

Well, I can say that for non-Erlang/Elixir languages, I tend to use a simple implementation of Hexagonal Architecture without caring too much about layers - but caring about doing TDD (please read TDD as example-driven development)

chrisblom21:11:39

Software Design for Flexibility goes far beyond organising code into reusable blocks. IMO it's not about reuse of sections of code; layering is about redefining fundamental concepts so that existing code keeps working while enabling more possibilities. The chapter on math expressions demonstrates this very well.

Samuel Ludwig21:11:50

I'd love to hear about any experience you've had playing with it!

chrisblom22:11:27

On the MIT site the book is described as being about strategies for building systems that can be adapted to new situations with only minor programming modifications. I've never directly implemented the ideas, but there were several occasions where I came up with a solution inspired by the book.

For example: I was working on an ecommerce system which already supported ordering integer quantities of some product (e.g. 4 apples), and it needed to be extended to support ordering weighted items (e.g. 4 packs of 150g of meat). The implementation was in Kotlin and used Ints to represent quantities. There was already code with lots of arithmetic for processing orders, generating reports, etc. I was able to replace Int with an abstract Quantity interface with two concrete implementations, Pieces(quantity: Int) and WeightedPieces(quantity: Int, weight: Int). By defining suitable implementations of +, *, / for Quantity I was able to treat both cases as numbers, and was able to reuse almost all of the existing code. (This is an example of redefining a concept: the stock of a product was an Int, and was redefined to be a richer type, while most of the code kept working.)

Another use case was when I created a build and deployment tool in Clojure. The initial version was a big Clojure expression that would build jars, put them in Docker images, deploy them to AWS, etc. The real example was much bigger, but in spirit it boiled down to something like this:

(defn deploy [revision env]
  (let [git-repo (checkout-git-repo revision)
        jar (build-jar git-repo)
        docker-image (build-docker-image jar)]
    (deploy-to-aws env docker-image)))
Running this for each env proved too slow, so I introduced a macro-based system that was able to infer the call graph and inject caching and change detection by comparing the function arguments to the arguments of the inputs in the previous run. All we had to do was replace defn with our custom defn-build-step in the code.

So for example, with this system, calling (deploy "v1.2" "prod") the first time would build the jar and docker image and deploy it to prod; calling it the second time would skip building the jar and reuse the previously built docker image. Here also I was able to reuse lots of code by redefining a concept, in this case replacing normal functions with caching / change-detecting functions.
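The Quantity idea can be sketched roughly like this (in Python rather than the original Kotlin; everything beyond the Pieces/WeightedPieces names mentioned above is hypothetical):

```python
# Sketch of "redefine Int as a richer Quantity type": existing arithmetic
# code keeps working because it only relies on the operators being defined.

class Pieces:
    """Plain integer quantity, e.g. 4 apples."""
    def __init__(self, quantity):
        self.quantity = quantity

    def __add__(self, other):
        return Pieces(self.quantity + other.quantity)

    def __mul__(self, n):
        return Pieces(self.quantity * n)


class WeightedPieces:
    """Quantity of weighted items, e.g. 4 packs of 150g of meat."""
    def __init__(self, quantity, weight):
        self.quantity = quantity
        self.weight = weight  # grams per piece

    def __add__(self, other):
        return WeightedPieces(self.quantity + other.quantity, self.weight)

    def __mul__(self, n):
        return WeightedPieces(self.quantity * n, self.weight)


def total(order_lines):
    """Stand-in for pre-existing order-processing code: it sums quantities
    with + and never needs to know which concrete type it is handling."""
    result = order_lines[0]
    for line in order_lines[1:]:
        result = result + line
    return result
```

The point is that total (and the rest of the arithmetic-heavy code it represents) is untouched when the weighted case is added.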

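A rough approximation of the defn-build-step idea (not the author's actual macro system, and in Python rather than Clojure; all step names are hypothetical) - cache each step on its arguments so that unchanged inputs skip the work:

```python
# Argument-based caching for build steps: a step reruns only when its
# inputs differ from a previous run.

_cache = {}
calls = []  # records which steps actually ran, for illustration


def build_step(fn):
    """Decorator standing in for defn-build-step: memoize on arguments."""
    def wrapper(*args):
        key = (fn.__name__, args)
        if key not in _cache:  # first run, or inputs changed: do the work
            calls.append(fn.__name__)
            _cache[key] = fn(*args)
        return _cache[key]
    return wrapper


@build_step
def build_jar(revision):
    return f"app-{revision}.jar"


@build_step
def build_docker_image(jar):
    return f"image:{jar}"


def deploy(revision, env):
    return (env, build_docker_image(build_jar(revision)))
```

Calling deploy("v1.2", "prod") twice runs each build step only once; the second call is all cache hits. (The real system also inferred the call graph, which a plain decorator does not.)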
chrisblom23:11:00

This is IMO much more interesting than Hexagonal Architecture, Polylith etc, which are just systems to organise sections of code in order to make parts more reusable. Software Design for Flexibility is about another type of reuse: building systems where you can flexibly change how existing code works, so that you minimize the need for rewrites when requirements change.

tengstrand04:11:39

> IMO it's not about reuse of sections of code, layering is about redefining fundamental concepts, so that existing code keeps working, while enabling more possibilities.

Having small decoupled components really helps with this. You often create new components that you just add to your existing codebase without affecting other components, very similar to how you add a new function without affecting existing functions (except that you may use it somewhere).

> This is IMO much more interesting than Hexagonal Architecture, Polylith etc, which are just systems to organise sections of code in order to make parts more reusable.

Components are not just about reusing code. They are designed to be easily replaced with other implementations of the same interface (a component will also define an interface, which can be implemented by other components). So with small replaceable blocks of code (components), a very good foundation is created for being able to replace parts without affecting surrounding code. Using Polylith components helps you a lot with building systems that can be adapted to new situations with only minor programming modifications. I see that all the time when using Polylith.

For example, in the current release that I'm working on, I was able to build two different command line tools: the existing poly tool and polyx. Polyx can do everything that poly can, but it can also generate images. So I created an image-creator-x component with the interface image-creator and included that component in the new polyx project (together with the other already existing components). I also created an image-creator component that implemented the image-creator interface, and used it in the poly project. The image-creator-x component creates an image, while image-creator just prints out "Creating images is not supported by the 'poly' tool. Please use 'polyx' instead".

The reason we didn't want to include the image-creator-x component in the poly tool was that it uses Java AWT and there were issues reported for Apple silicon. But because components support polymorphism at build time, it was easy to solve this problem. @U0P1MGUSX
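That build-time polymorphism could be sketched like this (a Python sketch, not the actual poly/polyx code; the class and function names are hypothetical stand-ins for the components mentioned):

```python
# Two components implement the same image-creator interface; which one a
# project bundles is decided when the artifact is assembled, not at runtime.

class ImageCreatorX:
    """Stand-in for the image-creator-x component, bundled into polyx."""
    def create_image(self, name):
        return f"{name}.png"  # the real component renders via Java AWT


class ImageCreatorStub:
    """Stand-in for the image-creator component, bundled into poly."""
    def create_image(self, name):
        return ("Creating images is not supported by the 'poly' tool. "
                "Please use 'polyx' instead")


def run_command(image_creator, name):
    # The rest of the tool depends only on the interface, so swapping
    # implementations never touches the surrounding code.
    return image_creator.create_image(name)
```

Each project wires in its own implementation, and run_command (the surrounding code) is identical in both.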

chrisblom10:11:04

Sure, reusable components are great and all, but the Software Design for Flexibility book covers different types of flexibility. Have you read the book?

ehernacki10:11:54

@U0P1MGUSX would you mind listing these different types here in a short manner?

chrisblom10:11:12

I've given 2 examples above

chrisblom10:11:27

You can read the book if you want to know more

ehernacki10:11:00

@U0P1MGUSX I find that the "go read the book" argument is very close to religious zealotry

chrisblom10:11:31

Well, the OP wants to discuss the ideas in a book. It's hard to discuss things if people haven't read the book. I can't really explain the ideas in a few lines

ehernacki10:11:05

> I can't really explain the ideas in a few lines

This is pretty fair. My problem is the use of the aforementioned argument instead of being direct like this

ehernacki10:11:12

(I used this argument myself in the past)

chrisblom10:11:21

My comment was meant more as a book recommendation than a 'read the manual'

tengstrand10:11:03

Increasing changeability is also a way to reduce complexity, which I try to explain in my blog post https://itnext.io/the-origin-of-complexity-8ecb39130fc. I haven't read the book, unfortunately.

ehernacki10:11:25

Usually when I am in a situation close to telling another person to "go read the book", there are two things I can recognise: one is that in my case there can be a bit of infatuation involved, and the other is that I haven't yet understood it well enough to explain the ideas in a simple manner

ehernacki10:11:51

Interesting @U1G0HH87L, for I believe that what you mean by "increasing changeability", I call "maintaining leverage/open options" 🙂 I'll read the post though

๐Ÿ‘ 1
chrisblom10:11:06

Sure, but I already tried to explain what the book is about in a simple way earlier in the thread, and also gave 2 examples. If my explanation is not clear and people want to know more, I'd recommend reading the book; it's really good.

ehernacki10:11:34

I get that @U0P1MGUSX, and your examples mention some qualities that are indeed valid for consideration in some use cases. But how do these arguments go against things like Hexagonal Architecture and Polylith? Also, what is meant by flexibility? Do these things belong to the same domain, or are they complementary but from distinct levels/contexts?

chrisblom10:11:35

It's not an argument against Hexagonal Architecture or Polylith at all; it's just that this thread was opened to discuss Software Design for Flexibility, which is about different topics, so Hexagonal Architecture and Polylith are off topic here IMO

ehernacki10:11:05

Is showing alternative perspectives off topic?

ehernacki10:11:17

Anyway, I can get what you mean

chrisblom10:11:22

I'd say it's complementary. Polylith / component / hexagonal architecture etc. are about defining and combining components. Basically: if I have this bag of components, how can I wire them together to build systems, or replace components within an existing system? The book discusses ways of reinterpreting a component to use it in new ways.

ehernacki11:11:24

@U01EB0V3H39 I forgot to reply to your question. I have an example that I'm doing right now: I am implementing a Tenant "microservice" for my company's SaaS product. It has two concrete use cases: (1) registering a new tenant using the input information and comparing/enriching it with an external Identity Provider, and (2) provisioning the required AWS infrastructure resources for the respective tenant.

I am using Rust with a rough "Hexagonal Architecture" approach, where each use case is implemented as a simple function using TDD (read: I drive the development of each use case with an example - i.e. test code - to measure whether the expected functionality of the use case is met). I know that my "Tenant Registration" use-case function depends on two external things or "ports": one for the Identity Provider implementation and another for storing the tenant data locally in a repository. At first I create a rough "in memory" version of these as an "adapter", which helps me to test the integration with the other parts of the system; later I implement the "adapters" we're actually going to use for the Identity Provider (e.g. Okta) and the data repository (e.g. DynamoDB). This allows us to migrate between databases and identity providers within a week if needed, because their boundaries are clearly defined and measurable (via the tests used to write the functionality in the first place).

At last, once I have the use-case code (with its related entities as well), I build the REST API resource responsible for converting HTTP requests to this use-case code, employing the Okta and DynamoDB "adapters". If I need to change something not fundamental in the system - like infrastructure components - it is well isolated within the boundaries of the "adapter" implementation in question, with waaay less cognitive overhead, allowing then for "deleting code". In this, I don't try to avoid the complexity; I'm trying to isolate it as best as I can.

I don't claim it is perfect though, I'll see with time if there are other places I may have coupled things together 🙂
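The shape of that ports-and-adapters design might look like this (a minimal Python sketch, not the actual Rust code; all names are hypothetical):

```python
# Hexagonal sketch: the use case depends on two "ports"; in-memory adapters
# stand in for the real Okta and DynamoDB ones when testing.

class InMemoryIdentityProvider:
    """Adapter for the identity-provider port (Okta in production)."""
    def __init__(self, directory):
        self.directory = directory  # email -> extra attributes

    def lookup(self, email):
        return self.directory.get(email, {})


class InMemoryTenantRepository:
    """Adapter for the tenant-storage port (DynamoDB in production)."""
    def __init__(self):
        self.tenants = {}

    def save(self, tenant):
        self.tenants[tenant["name"]] = tenant


def register_tenant(identity_provider, repository, name, admin_email):
    """Use case: enrich the input with IdP data, then store the tenant.
    It only sees the ports, so the adapters can be swapped freely."""
    enrichment = identity_provider.lookup(admin_email)
    tenant = {"name": name, "admin": admin_email, **enrichment}
    repository.save(tenant)
    return tenant
```

The tests drive register_tenant through the in-memory adapters; swapping in the Okta/DynamoDB adapters later changes no use-case code.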

ehernacki11:11:31

Another, more general example: apps written in Elixir/Phoenix (which run on Erlang's BEAM) not only foster small software parts, but encourage the use of what's called OTP in the Erlang world: https://serokell.io/blog/elixir-otp-guide

๐Ÿ‘ 1
chrisblom11:11:24

One more example came to mind where the flexibility described in the book would have helped: I was working on a system using OptaPlanner to generate and optimize work schedules for nurses. There are a lot of laws regarding how long they can work, so we implemented these using OptaPlanner's rule language, which is basically a relational language for constraint optimisation that you define using a Java API.

So, we delivered this product, but sometimes it was not possible to find a schedule that conformed to all the rules, and the client wanted to know when and where the rules were violated. OptaPlanner has some support for this, but it was too limited, so we basically had to implement separate code to point out all the violations of all the rules, even though we had already defined these rules in the constraint-optimisation DSL.

If this DSL had been open for extension the way it is described in the book, we might have been able to write a new interpreter for the DSL that pointed out violations in schedules, in addition to the existing interpreter offered by OptaPlanner that uses the DSL to optimize schedules. This would have made it possible to reuse code by changing the way these rules are interpreted. OptaPlanner, being a Java project, doesn't really offer the flexibility to inspect the DSL in a good way.
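The kind of extension being described - rules defined once as data, with more than one interpreter over the same definitions - might look like this (a hypothetical Python sketch, not OptaPlanner's API; the rules are made-up stand-ins for the real labour laws):

```python
# Rules as data: each rule is a (name, predicate) pair. Two interpreters
# reuse the same definitions: one scores schedules for the optimizer,
# the other explains exactly where the rules are violated.

RULES = [
    ("max-40-hours", lambda shift: shift["hours"] <= 40),
    ("no-night-after-day", lambda shift: not (shift["day"] and shift["night"])),
]


def score(schedule):
    """Interpreter 1: count satisfied rule applications (what a solver
    would maximize when searching for a schedule)."""
    return sum(rule(shift) for shift in schedule for _, rule in RULES)


def violations(schedule):
    """Interpreter 2: report which shifts violate which rules."""
    return [(i, name)
            for i, shift in enumerate(schedule)
            for name, rule in RULES
            if not rule(shift)]
```

Because the rules are inspectable data rather than opaque code, adding the violation-reporting interpreter required no duplication of the rule definitions - which is exactly what was missing in the closed DSL.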

ehernacki13:11:58

Note that in my example described above I didn't need this kind of flexibility, for I kept it boring and dumb.

chrisblom14:11:32

yeah it's not something most projects would need, but some do

๐Ÿ‘ 1
Max14:11:24

@U03HHT2F9S4 do you, as the talk describes, find yourself actually deleting and rewriting functions rather than modifying them in place?

ehernacki14:11:30

Currently not because these use-cases didn't change yet, but I can if I want to

Max14:11:53

I mean in general

ehernacki14:11:15

It depends on the change

ehernacki14:11:14

But for an example: yesterday I modified it in place to replace an unmaintained HTTP client that an adapter used

ehernacki14:11:30

Which was just about changing some symbols

apbleonard13:11:55

Slight tangent but I'm haunted by this quote "Devs like writing reusable code, but dislike reusing code." I've had little success getting others to stick to designs I felt were reusable in the senses used above. The temptation to write code free of someone else's design constraints to meet a specific need today is too great, even if it means more expensive changes tomorrow - particularly when the design effort is invisible to the customer. (Spoiler alert: our changes now are super expensive.)

💯 1