#off-topic
2023-11-25
Jason Bullers 21:11:42

I was just listening to this podcast by @ericnormand https://ericnormand.me/podcast/why-does-stratified-design-work and I think the reason it works to separate writing the general thing (e.g. the multi-map) from the specific is exactly the whole "decomplect" argument. To add another example to the one Eric gave, say I'm building something where I want to track unique students. I could do that in one step, but now I have to think about collection semantics at the same time as I think about student semantics. If I take those two things apart, I now have two simpler problems that I can solve separately: how do I collect unique elements, and what does it mean for two students to be equal? I've had a similar light bulb moment as I worked more with common HOFs like map and filter: it's so much simpler (once you "get it") to think about writing a transformation of a single element or a predicate for a single element. Then you put those together in a little pipeline. The alternative is to solve the problem at the collection level, but now you're likely to mix transforming with filtering and get things all twisted up. Even if you don't really need to reuse those smaller functions, the taking apart is just so much nicer for reasoning and testing. Thoughts?
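Roughly what I mean, as a tiny Clojure sketch (the student/result shapes and the passing cutoff are made up just for illustration):

```clojure
;; Two separate, simpler problems:
;; 1. collecting unique elements -> a set already does that
;; 2. deciding when two students are "the same" -> choose what a student value is
(defn student [id name]
  {:id id :name name})              ; plain maps compare by value

(defn unique-students [students]
  (set students))

;; Element-level thinking: one transformation and one predicate...
(defn with-percent [{:keys [score max-score] :as result}]
  (assoc result :percent (* 100.0 (/ score max-score))))

(defn passing? [{:keys [percent]}]
  (>= percent 60))

;; ...composed into a little pipeline, instead of one loop that
;; transforms and filters at the same time.
(defn passing-results [results]
  (->> results
       (map with-percent)
       (filter passing?)))
```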

2
❤️ 1
adi 02:11:39

A concrete example you may like: https://www.evalapply.org/posts/clojure-mars-rover/ I wrote the post to illustrate "design in the small" with Clojure. It draws from the lessons about "stratified design" taught in the SICP book.

👀 1
👍 1
adi 02:11:46

The meta-lesson is that the design approach can apply to a small program as well as to a large system (e.g. each layer is a service).

adi 02:11:20

I also just found this "AI memo" by Abelson and Sussman: "Lisp: A Language for Stratified Design" https://dspace.mit.edu/bitstream/handle/1721.1/6064/AIM-986.pdf

👀 1
gratitude-thank-you 1
ericnormand 14:11:09

Not to self-promote, but simply to add to the discussion (that's why I recorded these after all):
https://ericnormand.me/podcast/lisp-a-language-for-stratified-design - a deep dive into the above paper
https://ericnormand.me/podcast/what-is-missing-from-stratified-design - a reaction I realized later
https://ericnormand.me/podcast/what-makes-some-apis-become-dsls - more about the closure property

ericnormand 15:11:34

I still think there's something mysterious going on. How does one split a problem apart so that the two parts are easier to solve? That's just separation of concerns. Where do the huge savings come from?

Jason Bullers 15:11:55

Clarity of focus? I think we've all got a pretty intuitive understanding of divide and conquer, or going the other way, a sense that the whole can be more (complex) than the sum of its parts. I guess you're interested in something a little more rigorous than just hand waving?

Jason Bullers 15:11:11

In general, that's one of the hardest parts of divide and conquer: sure, it's good to do. But divide where? Divide how?

ehernacki 10:11:07

@ericnormand perhaps the direction you might be looking for has to do with cognitive capacity/overhead, i.e. being able to fit the problem in one's mind?

Ludger Solbach 20:11:09

A big saving also comes from reuse. If the solution to part of the original problem is really general, it can be reused in different contexts. Transducers spring to mind, which even abstract the higher-order functions from the collections they are working on. That's real reuse at work.
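A minimal sketch of that decoupling (toy element functions, nothing domain-specific):

```clojure
;; The transformation is described once, with no collection in sight...
(def xf (comp (map inc) (filter odd?)))

;; ...and then reused in different contexts:
(into [] xf (range 10))       ;=> [1 3 5 7 9]   eager, into a vector
(sequence xf (range 10))      ;=> (1 3 5 7 9)   lazy sequence
(transduce xf + 0 (range 10)) ;=> 25            straight into a reduction
;; the same xf can also be attached to a core.async channel, etc.
```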

👍 1
Ludger Solbach 20:11:13

And way better than the proposed reuse of APIs or components in the OO space.

Jason Bullers 21:11:50

Potential reuse is definitely a win, but I think what Eric is interested in, and what I find interesting, is that there's value independent of reuse. Like, even if you don't reuse it at all, there's value in separating out the problem. Reuse might actually complicate things a little bit, because then the more "general" part needs to be that much more robust.

Ludger Solbach 22:11:34

Robustness helps with reuse, as does having a second or third use case to generalize from. And good generality is valuable. On the other hand, I don't think that divide and conquer is necessarily the correct metaphor. For me that's more connoted with dividing a big problem into smaller problems of the same kind, as in quicksort for example. The kind of division we're talking about here has more to do with orthogonality of the design. This is what makes the parts simpler, because you don't have to think about two problems at once (simple vs. easy). This is what may lead to composition and reuse in contexts where the complected design could not be used.