#clojuredesign-podcast
2023-11-24
genekim 18:11:00

@neumann I loved your series on composability in the large — I wanted to share one aha moment I had that came from @ericnormand on a similar theme. This episode blew me away: he talks about a combination of functors and algebraic thinking. For me the big aha moment is the massive simplification you get when you have a series of functions that all take an A and return an A — and how you can write a video editor in that style, along with algebraic operations.

I recently had to figure out why my book sales rank tracking application broke (AGAIN!!! When I needed it most!), and once again found a mess in my code, written mostly 3-5+ years ago. It was due to a couple of things:
• Not doing a great job writing unit tests, because I had database calls in my tests (which you covered in your recent episode about components).
• One side-effecting operation in the middle of a ->> pipeline — the fix was to break it into two parts, which nicely separates the "gather data" step (input side effects at the beginning) from the "store data" step (output side effects at the end).
• A surprising one: I changed the shape of the data in that first "gather data" phase — it goes from a sequence of maps of books to be scanned to a single map of successful and failed book scans.

The last point was surprising because, until that moment, my mental model of the shape of the data had been proven very wrong. Most of my functions were glorious functors, taking a sequence of book maps and adding more values to each map (your bag of data pattern). And then I changed the shape in a surprising spot, where I separated that sequence of book maps into a single map containing two piles: whether sales rank retrievals succeeded or failed. To me, this was breaking composability, which definitely broke the "principle of least surprise." In other words, composition is greatly aided when you preserve the shape of the data and have these functor-like properties. (In fact, just writing this out makes me realize that this transformation may be better in a separate phase, so I can test it independently of side effects.)

At any rate, I wanted to share this story, tapped out on my phone while it's fresh in my head. Keep up the great work!!! https://ericnormand.me/podcast/3-examples-of-algebraic-thinking

chef_kiss 3
💡 2
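(A minimal sketch of the gather/transform/store split described above; every function name and value here is hypothetical, not taken from the actual app:)

(defn gather-books []
  ;; input side effects would live here (db/API reads); stubbed for the sketch
  [{:book/title "The DevOps Handbook"}
   {:book/title "The Phoenix Project"}])

(defn fetch-rank [book]
  ;; enrich one book map; a real version would call the sales-rank API
  (assoc book :stats/success? true :stats/sales 42))

(defn summarize [books]
  ;; the pure, shape-changing pivot, isolated in its own phase so it
  ;; can be unit tested without any side effects
  (let [{succeeded true, failed false} (group-by :stats/success? books)]
    {:success (vec succeeded) :failed (vec failed)}))

(defn store-results! [summary]
  ;; output side effects at the end; stubbed as println
  (println "storing" summary))

(->> (gather-books)
     (mapv fetch-rank)
     summarize
     store-results!)

Because summarize is pure, the surprising shape change gets its own testable phase, with I/O confined to the first and last steps.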
neumann 00:11:11

@U6VPZS1EK Thanks so much for sharing this! I'll take a look at Eric's video. I like how you described the distinction: "gather data" and "store data". As for your data pivot: I'm super curious how your single map is structured. Is it something like {:success [...], :failed [...]} or something else?

neumann 00:11:54

Sometimes @U0510902N and I like to talk about "asking questions about the data". Here's a toy example. Suppose I have a sequence of maps, where each map is the information for one book. E.g.:

{:book/title "The DevOps Handbook"}
And some process that fetches the "stats" for the book and enriches the map with information:
{:book/title "The DevOps Handbook"
 :stats/success? true
 :stats/sales 42}
Perhaps some kind of fetch function like so:
(mapv fetch-sales books) ;; mapv, so the enriched book maps are returned
Let's assume that function doesn't throw, but notes the error:
{:book/title "The DevOps Handbook"
 :stats/success? false
 :stats/error ...some Exception...}
Now you can "ask questions".
(defn all-succeeded [books] (filter :stats/success? books))
(defn all-failed    [books] (remove :stats/success? books))
Or if you want both:
(let [{succeeded true, failed false} (group-by :stats/success? books)]
  ...
  )
The idea is that you can have that canonical representation (seq of maps), but pivot it at the point of use (the group-by example).
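(To make the pivot at the point of use concrete, here is the toy data run through group-by; the sample values are made up:)

(def books
  [{:book/title "The DevOps Handbook" :stats/success? true  :stats/sales 42}
   {:book/title "The Unicorn Project" :stats/success? false :stats/error "timeout"}])

(group-by :stats/success? books)
;; => {true  [{:book/title "The DevOps Handbook" :stats/success? true :stats/sales 42}]
;;     false [{:book/title "The Unicorn Project" :stats/success? false :stats/error "timeout"}]}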

neumann 00:11:39

I'm not sure if I'm tracking with what you said. Let me know when you have a moment.

neumann 02:11:28

@U6VPZS1EK A couple comments on the video. Eric talks about combinators, which I find to be very nifty. It didn't seem like he dwelled on it much in the video, but all of these operations (parts of the "algebra") are higher-order. You have a function that takes two other functions and produces a new function that combines their behavior. The idea is that you build up complex behavior from these relatively simple parts that combine. Parser combinators are a classic example.

I do think the key to the concept is that it is higher-order. That can also make it hard to understand. It isn't using comp to put the functions together; it's like you're writing special versions of comp for that domain. For example, if you have a function space that can recognize a single space, you might have a combinator many that takes a recognizer function (like space) and returns a new function that recognizes any number of them in a row, so (many space) recognizes any run of spaces. E.g.:

(let [all-spaces (many space)]
  (all-spaces input))
You end up building up a mini-language of higher-order functions.
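(A self-contained sketch of the space/many example; it encodes a "recognizer" as a function from an input string to the remaining input, or nil on failure. This is one plausible encoding, not necessarily Eric's:)

(defn space [s]
  ;; recognize a single leading space, returning the rest of the input
  (when (and (seq s) (= \space (first s)))
    (subs s 1)))

(defn many [recognizer]
  ;; combinator: build a recognizer matching zero or more occurrences
  (fn [s]
    (loop [s s]
      (if-let [s' (recognizer s)]
        (recur s')
        s))))

(let [all-spaces (many space)]
  (all-spaces "   hello"))
;; => "hello"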

neumann 02:11:00

A similar, but different, concept is what @U0510902N and I talk about when we refer to "building up the language". In that case, we're working with something more like a reducing function: a function of the shape (A B) -> A. E.g. a function that takes a "state" and an "event" and produces an updated "state". This is not a higher-order function; it takes two pieces of data and produces a new piece of data. However, you get similar benefits as you "build up the language" of that data -> data transformation system. You can build up different operations. A great example of this are the HoneySQL helpers: https://github.com/seancorfield/honeysql. They all take a query data structure, operate on it, and return an updated query data structure.

💡 1
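(A minimal sketch of that (A B) -> A reducing-function shape; the event types and keys here are made up, and this is not HoneySQL's actual API:)

(defn apply-event [state event]
  ;; state in, event in, updated state out: plain data -> data
  (case (:event/type event)
    :book/added   (update state :books conj (:book event))
    :book/removed (update state :books disj (:book event))
    state))

(reduce apply-event
        {:books #{}}
        [{:event/type :book/added   :book "The DevOps Handbook"}
         {:event/type :book/added   :book "Release It!"}
         {:event/type :book/removed :book "Release It!"}])
;; => {:books #{"The DevOps Handbook"}}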