#architecture
2024-03-21
john18:03:46

I'm interested in folks' opinions on the ideas in https://github.com/johnmn3/af.fect I'm still not 100% sure that it's a good idea, or whether it was already invented and forgotten in the lisp community decades ago, as it's a pretty simple idea. The idea keeps coming back to me though, because I like to test LLMs by asking them to make a version of affect with this prompt: "create a function that takes a function (the operator) and returns a function that either applies its arguments to the operator function or takes a map that can redefine the data passed to and returned from the operator function, which returns a new function that can do the same thing as its parent function." Some of them do a pretty good job! I like the problem because it's almost like a quine of some sort. I spruced up one of the answers to create a more simplified version of affect:

(defn mk-static-effect [{:as ctx :keys [static-effect pre af post merge-fn]
                         :or {pre identity af identity post identity}} args]
  (or static-effect
      (let [res (post (af (pre (assoc ctx :args args))))
            ctx (if res ((or merge-fn merge) ctx res) ctx)]
        (fn static-effect [& args]
          (apply (:ef ctx) args)))))

(defn mk-effect [{:as ctx :keys [pre af post args merge-fn effect]
                  :or {pre identity af identity post identity merge-fn merge}}]
  (or effect
      (fn effect [& args]
        (let [res (post (af (pre (assoc ctx :args args))))
              ctx (if res (merge-fn ctx res) ctx)
              new-args (:args ctx [])
              ef (:ef ctx identity)]
          (apply ef new-args)))))

(declare extend-fn)

(defn mk-affect [{:as ctx :keys [affect]}]
  (or affect
      (let [init (:init ctx identity)
            ctx (init ctx)]
        (fn affect [& args]
          (if-not (some-> args first meta (contains? :a/f))
            (apply (mk-effect ctx) args)
            (if (some-> args first :dump?)
              ctx
              (apply extend-fn ctx args)))))))

(defn ctxify [ctx-or-fn]
  (if-not (map? ctx-or-fn)
    {:ef ctx-or-fn}
    ctx-or-fn))

(defn comp-key [k ctx ctxs & [catch-fns?]]
  (let [old-afn (k ctx identity)
        afn (if (fn? (first ctxs))
              (if-not catch-fns?
                identity
                (first ctxs))
              (k (first ctxs) identity))
        afns (->> ctxs
                  rest
                  (mapv k)
                  (filter identity)
                  (concat [old-afn afn]) 
                  reverse
                  (apply comp))]
    afns))

(defn merge-ctxs [ctx ctxs]
  (let [merge-fn (-> ctx :merge-fn (or merge))
        ctx (apply merge-fn ctx (filter map? ctxs))]
    ctx))

(defn mk-fn-extender [ctx ctxs]
  (let [ctx (ctxify ctx)
        init (comp-key :init ctx ctxs) 
        pre (comp-key :pre ctx ctxs)
        af (comp-key :af ctx ctxs true)
        post (comp-key :post ctx ctxs)
        new-ctx (merge-ctxs ctx ctxs)]
    (mk-affect (assoc new-ctx :init init :pre pre :af af :post post))))

(defn extend-fn [ctx & ctxs]
  (if (:freeze? (first ctxs))
    (mk-static-effect ctx ctxs)
    (mk-fn-extender ctx ctxs)))

(def add (extend-fn +))

(def add-and-inc
  (add
   ^:a/f
    #(assoc % :ef (fn [& args]
                    (->> args (apply (:ef %)) inc)))))

(add-and-inc 2 2) ;=> 5

phronmophobic19:03:53

It's not totally clear what the ultimate goal is. Just browsing through the implementation I see a couple of things:

I'm pretty skeptical of designs where every input is a "maybe this or that". If every value could be a wrapped or unwrapped value, then you end up with a combinatorial explosion of code paths that can be very hard to read, reason about, or debug. Further, normal functions no longer work and must be converted into a maybe-this-or-that oriented function, which reduces reusability. It's hard to tell, but I think you've reinvented the monad.

> if-not (some-> args first meta (contains? :a/f))

Metadata on functions is undefined: https://ask.clojure.org/index.php/11514/functions-with-metadata-can-not-take-more-than-20-arguments?show=11515#a11515 Further, :a/f seems to be data and not metadata (ie. mk-affect does not have value semantics, since equal inputs do not have equal outputs).

Using homophones (eg. affect and effect) for similar, but different, concepts is asking for trouble.

phronmophobic19:03:39

This style also reminds me of defadvice from elisp. Maybe there's some inspiration to draw from there, https://www.gnu.org/software/emacs/manual/html_node/elisp/Advising-Functions.html
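For readers who haven't used elisp advice, a rough Clojure analogue of `:around` advice might look like this (hypothetical helper, not part of af.fect or elisp):

```clojure
;; Sketch of :around-style advice in Clojure: wrap an existing
;; function without touching its implementation. `advice` receives
;; the original function plus the call's arguments, so it can run
;; code before and after delegating.
(defn advise-around
  [f advice]
  (fn [& args]
    (apply advice f args)))

(def add-and-inc
  (advise-around + (fn [orig & args]
                     ;; run the original, then post-process its result
                     (inc (apply orig args)))))

(add-and-inc 2 2) ;=> 5
```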

john19:03:31

The goal of it is to allow implementation reuse, rather than having to reimplement everything if a change is required half-way up the composition stack of a given function. It's a hard-to-describe problem, but it was one I faced when building lots of cljs widgets: being able to branch off versions of existing implementations and change things normally hidden behind the encapsulation of the closure. The real pain came when trying to adapt a component that had an internal managed-component, abstracting away change handlers and state management for the developer, but requiring a reimplementation for every version involving different state management semantics. This led to a seemingly unnecessary amount of code duplication in the codebases. I made a simple experimental component lib with it here https://github.com/johnmn3/comp.el but never got around to creating the example where you have lots of code duplication, as the todolist example was too simple to show it. But that's the general point - to reduce duplication of concrete implementations that can otherwise be shared transparently

john19:03:18

Fair point about the maybe-this-maybe-that. Transducers introduce this-and-that pathways that are pretty different. But I'm not sure what you mean by "normal functions no longer work" - what does that mean? Callers might not know they're calling an extendable function, and they don't need to know. And you can freeze the function so it can't be extended if necessary

john19:03:59

Yeah, affect/effect is confusing. Maybe "extensible function" is a better semantic

phronmophobic20:03:04

> Transducers introduce this and that pathways that are pretty different

I would differentiate between branching (which may or may not be essential) and values that are "maybe this or that", which requires branching. Except for reduced?, I don't think transducers have any "maybe this or that" values.

phronmophobic20:03:39

What's the difference between an extensible function and a wrapped function?

john20:03:44

Interesting point about the metadata. There's a few other ways to do it, like having a special parameter that switches the mode when it's passed in.

phronmophobic20:03:50

> There's a few other ways to do it, like having a special parameter that switches the mode when it's passed in.

Protocols are often a good choice. They're extensible, and they remove branching in the implementation (ie. (affect x))
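A sketch of the protocol suggestion (hypothetical names; the idea is dispatching on the argument's type instead of checking metadata):

```clojure
;; Hypothetical sketch: a protocol removes the "is the first arg
;; flagged?" branch by dispatching on type. Maps contribute config;
;; plain functions become the operator.
(defprotocol IAffect
  (-extend-with [this ctx] "Fold this value into the context map."))

(extend-protocol IAffect
  clojure.lang.IPersistentMap
  (-extend-with [m ctx] (merge ctx m))      ; maps merge in new config
  clojure.lang.Fn
  (-extend-with [f ctx] (assoc ctx :ef f))) ; fns replace the operator

(-extend-with {:pre identity} {:ef +}) ;=> {:ef +, :pre identity}
(-extend-with - {:ef +})               ;=> {:ef -}
```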

john20:03:12

I just mean the extra arity on like map and reduce, where the transducer version is more open

john20:03:29

But it might be apples and oranges

john20:03:49

yeah, I've done it with protocols before

phronmophobic20:03:28

Right, you could have something that uses deftype that implemented IFn , but is also usable as data via get or assoc.
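That might look something like this sketch (JVM Clojure; `assoc` support would additionally need `Associative`, so only lookup is shown):

```clojure
;; Hypothetical sketch: a deftype that is callable like a function
;; but readable like a map via get / keyword lookup.
(deftype FnMap [m]
  clojure.lang.IFn
  (invoke [_ a b] ((:ef m) a b))   ; callable (2-arity shown for brevity)
  clojure.lang.ILookup
  (valAt [_ k] (get m k))          ; usable as data
  (valAt [_ k not-found] (get m k not-found)))

(def add (FnMap. {:ef +}))

(add 2 3)  ;=> 5
(:ef add)  ;=> the + fn
```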

john20:03:24

Yeah, that's probably best, but the impls diverged moreso between the clj and cljs versions. Just using a parameter passed in is good enough to show the idea of how it works though. The best solution would def use protocols

phronmophobic20:03:03

I think this could also be implemented as a monad. Rather than having every operation take a "maybe this or that", you just have the return and bind operations that bridges the gap with normal functions.
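A minimal sketch of that monadic shape (illustrative names, not a real library): `return` wraps a plain value, `bind` threads the wrapper through step functions, and `lift` adapts ordinary functions so they never see the wrapping.

```clojure
;; Hedged sketch of the return/bind framing from the discussion.
(defn m-return [v] {:value v})   ; lift a plain value into the context

(defn m-bind [m f]
  ;; f takes a plain value and returns a wrapped one
  (f (:value m)))

(defn lift [f]
  ;; adapt a normal function to the wrapped world
  (fn [v] (m-return (f v))))

(-> (m-return 2)
    (m-bind (lift inc))
    (m-bind (lift #(* 2 %)))
    :value) ;=> 6
```

The point of the sketch: only `m-return`/`m-bind`/`lift` know about the wrapping, so ordinary functions like `inc` stay reusable unchanged.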

john20:03:18

I'm not sure I get what you mean by "this or that" actually. To the consumer of the function, the "that" is an implementation detail they never have to know about

john20:03:25

That's not their api

phronmophobic20:03:42

Yea, could just be a misunderstanding. I was just looking at the implementation and every function starts with:

(or this
    (that ....))

john20:03:34

oh, that's just so implementers can whole-cloth drop in entirely different definitions of what makes up the machinery of the thing

john20:03:30

Most wouldn't ever use that low level feature. Idea there is that you don't need a version two of extend-fn. Just pass in the version two part

john20:03:04

It's like turning a function inside out, because it can be called from the inside, by the params being passed in

john20:03:17

But maybe that's just a gimmick... It'd work just as well always calling "extend-fn" when you want to extend the fn

phronmophobic20:03:49

Or you could just use a map

john20:03:55

The only interesting aspect is that you don't have to require in any extend-fn lib, because it's already built into the fn you require in

john20:03:13

A map with fn impled on it?

john20:03:37

Yeah, I actually did an extend-via-metadata impl too šŸ™‚

john20:03:59

There's lots of ways to skin that cat

john20:03:27

I've done the map with fn impled on it too

phronmophobic20:03:34

> A map with fn impled on it?

It's hard to discuss in the abstract, but I would probably just keep the impl separate. The program manipulates data up until the very end, and then there's a final transformation that turns the data into a machine/implementation. And there can be multiple data -> machine options available

john20:03:38

in cljs it's as easy as (specify! {} (my-fancy-trick...

john20:03:54

or the fn interface

john20:03:18

And the caller doesn't even have to know they're calling a map šŸ™‚

phronmophobic20:03:20

Eg. I have a big datastructure that represents my blog. I keep transforming and accreting. At the end, I use the data to spit out a desktop website, a mobile website, and an app, or whatever.

john20:03:38

ah right

phronmophobic20:03:16

Maybe I'm having issues, so I use the same datastructure and spit out a website with extra instrumentation and logging.

john20:03:49

That's one possible direction to go. I think there's existing problems it solves though. Not a lot maybe, just some niche situations. Building GUIs is pretty much what OO was invented for, and you get some of that impl sharing here

phronmophobic20:03:17

I also think OO is bad for building GUIs

john20:03:31

We love to hate OO

john20:03:52

I'm not advocating OO

john20:03:24

And some of the problems with object encapsulation and data hiding are also there with closures

john20:03:23

But there's definitely an issue I've seen with how we do things in clojure where we end up duplicating code because that's just the easiest way to solve a problem because there's no impl reuse in clojure in that way

john20:03:07

We're kinda allergic to impl reuse. Maybe for good reason

john20:03:18

Pretty sure Rich is against it

phronmophobic20:03:33

Maybe I'm not actually following what you mean by impl reuse?

phronmophobic20:03:21

Or do you have an example where code is duplicated unnecessarily?

john20:03:59

lol no, it's hard to create, because it usually involves larger codebases

john20:03:09

It's hard to explain lol

john20:03:39

Like, we'll make functions all the time, that are just a composition of 5 or 10 other functions, right?

john20:03:42

We might wrap function 8 to make function 9. But what if function 9 needs function 4 to behave differently, without having to reimpl functions 5 through 8?

john20:03:41

With this, the impl of fn 4 can be exposed, so that fn 6, 8, or 23 can hot-swap it out for something else

phronmophobic20:03:58

> But what if function 9 needs function 4 to behave differently

That seems like a bad problem to have. Ideally, functions are decoupled and composed together.

john20:03:31

It's not an uncommon problem IMO

john20:03:36

in some code bases

john20:03:51

well, some specific ones I've seen

john20:03:05

But maybe they were bad solutions in the first place

phronmophobic20:03:01

To me, there's a difference between solving coupling problems by making coupling easier and solving them by taking things apart and decoupling them. I do think clojure tends to actively avoid making it easier to couple things together.

john20:03:23

It's an inherent problem with closures and data encapsulation though, right?

phronmophobic20:03:36

I don't think so.

john20:03:14

Well, you know, you end up in a situation and you're like, "dang it, I wish I could get to the data hidden behind that closure boundary, hmmm"

phronmophobic20:03:18

Going back to the blog example. To me, that means you stopped working with data prematurely.

john20:03:36

You wouldn't actually use this for that though. You'd just use a map for that, right? This just wouldn't be good for that I think

john20:03:49

Here's another way to look at this. It's like you have interceptor chains on the inputs and the outputs of your function. You can extend the behavior of that function, creating a new version of it, by augmenting the interceptor chains before and after the fn.
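A stripped-down sketch of that interceptor reading (hypothetical shape, much simpler than af.fect itself): the function is a map of enter/leave chains plus an operator, and "extending" is just conj-ing onto a chain.

```clojure
;; Hypothetical sketch: enter steps transform the argument seq,
;; the operator runs, then leave steps transform the result.
(defn run [{:keys [enter ef leave]} & args]
  (let [in  (reduce (fn [a f] (f a)) args enter)
        out (apply ef in)]
    (reduce (fn [v f] (f v)) out leave)))

(def base {:enter [] :ef + :leave []})

;; extend by appending to a chain, without reimplementing +
(def add-and-inc (update base :leave conj inc))

(run base 2 2)        ;=> 4
(run add-and-inc 2 2) ;=> 5
```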

phronmophobic20:03:51

Maybe I'm just a weirdo, but I would. The components in my UI library are maps (defrecords, not literals).

phronmophobic20:03:03

> You can extend the behavior of that function, creating a new version of it

I also think this is the wrong perspective. It's not a new version of the function. It's a different function.

john20:03:24

A new version is a different version, what's your point though?

john20:03:41

I meant to imply that it was a different version

john20:03:11

You're not mutating the behaviours attached to the parent it came from

phronmophobic20:03:48

If it takes a different type of thing, has a different behavior, or returns a different type of thing, then it's not a version of the old function, it's a different function.

john20:03:12

Well, agreed, I didn't mean to imply otherwise

john20:03:25

It's not the same function

john20:03:56

But, it carries all the dna of the old function

john20:03:25

so it can re-impl any parent part in new fns

phronmophobic20:03:02

I think these subtle distinctions are actually important from a design aspect when building larger applications. I would say "parent" function and "re-implement" are tricky, not simple, and difficult to reason about. If at all possible, I would prefer regular functions and "reuse" over "reimplement".

phronmophobic20:03:25

I guess it's true that I don't think you should care about the insides of functions.

phronmophobic20:03:23

It seems like it would be helpful to have a concrete example. If you don't think the blog example is a good one, maybe it would be helpful to think about another or even just say why the blog example isn't applicable in order to brainstorm another.

phronmophobic20:03:36

I'm also happy to let bygones be bygones if you don't think this discussion is helpful. I admit I can get carried away sometimes.

john20:03:05

Nah, I love that you're challenging the idea! I'm not convinced about it myself. I just have this strong suspicion and it keeps coming back to me. Maybe I'm just attracted to the simple quine-like nature of the solution

john20:03:10

So, in the readme, you can see this example:

(def el
  (af
   {:as ::el :with [add-props classes]
    :env-op form-1})) ; <- env-op also passes the environment to the op

(def grid
  (el
   {:as ::grid
    :props {:comp mui-grid/grid}}))

(def container
  (grid
   {:as ::container
    :props {:container true}}))

(def item
  (grid
   {:as ::item
    :props {:item true}}))

(def btn
  (el
   {:as ::btn
    :props {:model :button
            :comp  mui-grid/button}}))

(def input
  (el
   {:as ::input :with [hide-required use-state validations]
    :props {:comp mui-grid/text-field}}))

(def form-input
  (input 
   {:as ::form-input
    :props {:style {:width "100%"
                    :padding 5}}}))

(def email-input
  (form-input
   {:as ::email-input
    :props {:label "Email"
            :placeholder ""
            :helper-text "validating on blur"}
    :validate-on-blur? true
    :valid [#(<= 4 (count %))        "must be at least 4 characters"
            #(= "@" (some #{"@"} %)) "must contain an @ symbol"
            #(= "." (some #{"."} %)) "must contain a domain name (eg \"\")"]}))

(def password ; <- abstract
  (form-input
   {:as ::password-abstract
    :props {:label "Password"
            :type :password}
    :valid [#(<= 8 (count %)) "must be longer than 8 characters"]}))

(def password-input
  (password
   {:as ::password-input
    :props {:validate-on-blur? true}}))

(def second-password-input
  (password
   {:as ::second-password-input :with submission
    :valid    [#(= % (password-input :state))
               "passwords must be equal"]
    :fields   [email-input password-input second-password-input]
    :props {:on-enter (fn [{:as _env :keys [fields]}]
                        (ajax-thing/submit-fields fields))}}))

(def submit-btn
  (btn
   {:as ::submit-btn :with submission
    :fields   [email-input password-input second-password-input]
    :props {:variant  "contained"
            :color    "primary"
            :on-click (fn [{:as _env :keys [fields]}]
                        (ajax-thing/submit-fields fields))}}))

#_...impl

(defn form [{:as props}]
  [container
   {:direction "row"
    :justify   "center"}
   [item {:style {:width "100%"}}
    [container {:direction :column
                :spacing 2
                :style {:padding 50
                        :width "100%"}}
     [item [email-input props]]
     [item [password-input props]]
     [item [second-password-input props]]
     [container {:direction :row
                 :style {:margin 10
                         :padding 10}}
      [item {:xs 8}]
      [item {:xs 4}
       [submit-btn props
        "Submit"]]]]]])

john20:03:27

So look at where it says (def password ; <- abstract

john20:03:03

Notice how passwords behaviors and attributes accrete on to the form-input, and then password-input and second-password-input accrete their custom behaviors onto password

john20:03:05

password-input, if necessary, in its impl, can change the width and padding specified in form-input

phronmophobic20:03:11

I'm trying to figure out how this is different than just doing that with maps?

john21:03:53

Well, you could store everything as maps at the top level and have some indirection thing turning them into things that are functions that derive from one another's maps, that'd work too

phronmophobic21:03:11

That makes sense. That's kind of what I was thinking. Just use maps/data. To produce the final artifact, you take the giant datastructure and turn it into the "machine" that runs your application.

phronmophobic21:03:49

At any point along the way, you can accrete cross cutting concerns like logging, instrumentation, apply optimizations, and otherwise.

john21:03:54

Yeah, that's pretty much what it is

john21:03:13

just papers over it with the extend-fn stuff

phronmophobic21:03:54

And you can have multiple choices of how to spit out prod app, debug app, internal tool, debugger, etc. from the data.

john21:03:26

It's kinda like having a user level env normally available to the compiler

john21:03:52

And you can attach compile-time effects and runtime effects

clojure-spin 1
phronmophobic21:03:06

For me, the important part is to document the data specification (the semantics of properties and which values are valid) rather than trying to treat intermediate data as functions.

john21:03:42

Like this:

(def add (extend-fn +))

(def bad-key
  (add
   ^:a/f 
    {:init (fn [ctx]
             (println :init ctx)
             (when (-> ctx (contains? :secret))
               (throw (js/Error. "No secrets allowed")))
             ctx)}))

(bad-key 1 2) ;=> 3

(def add-and-inc
  (bad-key
   ^:a/f
    {:secret :sauce
     :af (fn [{:as ctx :keys [ef]}]
           (assoc ctx :ef (fn [& args]
                            (->> args (apply ef) inc))))})) ;=> error: No secrets allowed
So that gets caught at compile time

john21:03:25

That's why I brought in the "affects" idea: trying to differentiate between compile time and run time. Though in this impl, only init runs exclusively at compile time; pre, op and post all run at runtime

phronmophobic21:03:15

I'm not sure "compile time" and "run time" make sense without a specific environment. I think just having separate validations that can be applied for specific uses makes more sense.

phronmophobic21:03:36

ie. dev check, staging check, prod check, foo-company-pre-checkin-check

phronmophobic21:03:22

This implementation seems "operation focused" rather than data-oriented.

john21:03:02

I might be using the wrong terms here too. But the point was that the error will be thrown at compile time, and add-and-inc will never get to be defined.

john21:03:41

Whereas, sticking the throw in a pre or post would not throw until the function was called, potentially, depending on impl

phronmophobic21:03:16

(def bad-key
  {:init (fn [ctx]
           (println :init ctx)
           (when (-> ctx (contains? :secret))
             (throw (js/Error. "No secrets allowed")))
           ctx)
   :op +})

(invoke bad-key 1 2)

(def add-and-inc
  (merge
   bad-key
   {:secret :sauce
    :af (fn [{:as ctx :keys [ef]}]
          (assoc ctx :ef (fn [& args]
                           (->> args (apply ef) inc))))}
   )) ;=> error: No secrets allowed
Here's some pseudo code for what I imagine a more data oriented api might look like.

john21:03:40

Well, merge wouldn't produce that error, right? But I get your point about the data orientation

john21:03:06

You could have a special merge

john21:03:36

That's all this is, taking care of the special-invoke and special-merge, for data defined functions

phronmophobic21:03:37

well, merge+validate, maybe

john21:03:53

In the impls I've been playing with, we comp together functions of like keys for some of the keys

john21:03:03

so that behaviors accrete

phronmophobic21:03:21

maybe because the secrets check isn't the best example, but if you did want something like that, you could have some special helpers for merge+validate for sugar. I'm not sure I'm totally sold.

john21:03:21

so merge-comp-validate-whatever-you-want

phronmophobic21:03:20

yea, clojure.core/merge might not be enough and you might want a special cool.lib/merge or cool.lib/combine or whatever is actually a good name for it.
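A merge+validate helper along those lines might be sketched as follows (hypothetical names; just plain data operations, with validation as an explicit step over the merged result):

```clojure
;; Hypothetical sketch: merging is still clojure.core/merge;
;; every contributed :init check runs against the merged map.
(defn merge-validate [& ctxs]
  (let [ctx (apply merge ctxs)]
    (doseq [check (keep :init ctxs)]
      (check ctx))
    ctx))

(def no-secrets
  {:init (fn [ctx]
           (when (contains? ctx :secret)
             (throw (ex-info "No secrets allowed" ctx))))})

(merge-validate no-secrets {:ef +})             ; ok, returns merged map
;; (merge-validate no-secrets {:secret :sauce}) ; would throw at merge time
```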

phronmophobic21:03:48

I would still want validation to be available separately, even if it's more idiomatic for your use case to combine them.

john21:03:00

For some stuff you'll want a deeper merge too, but you can define those within the data as well

šŸ‘ 1
phronmophobic21:03:50

Yea. The key idea is that it's just a data operation which takes data and returns data.

john21:03:02

Like, for my components, I'm merging the style maps together, so one :style key doesn't clobber the other

šŸ‘ 1
john21:03:43

See, you don't need merge+validate if you have some merge-magic that allows you to add validation behavior to the thing downstream

john21:03:21

So, an interesting question about this thing is, what is the minimal impl that allows you to build an extensibility system like this, where you can accrete in behaviors like validation after the fact. The above is one of the more minimal versions I've come up with that has a half decent api

phronmophobic21:03:47

> See, you don't need merge+validate if you have some merge-magic that allows you to add validation behavior to the thing downstream

That's the thing. I'm not sold on needing the validation at every definition anyway. I definitely don't want merge to magically transmogrify depending on some config.

phronmophobic21:03:23

That's moving away from data orientation to operation orientation and I don't think it helps.

phronmophobic21:03:51

I don't think you can say whether data is valid outside of a particular context. Defining data should usually be contextless (ie. not coupled to a specific use case).

john21:03:08

lol I hear you. It'd still be functional and immutable, but yeah it sounds like it could get hairy

phronmophobic21:03:26

I've worked with those kinds of systems where you need to reconfigure your environment to get things to work together. It then becomes difficult to reuse the same code in a new context like staging, debugging, prototyping, benchmarking.

john21:03:02

Yeah, implicit bindings all over the place, it's a nightmare

john21:03:22

This has similarities and differences from that situation

phronmophobic21:03:43

> So, an interesting question about this thing is, what is the minimal impl that allows you to build an extensibility system like this, where you can accrete in behaviors like validation after the fact.

Just have the operations you want a la carte. You can then take the simple stuff and compose it with those ops when it's convenient.

phronmophobic21:03:09

It's super easy to set up your workflow so validation happens on every eval/file change/checkin/git push.

phronmophobic21:03:50

Or not. if you're prototyping.

john21:03:33

well, that example was about form validations, but yeah. Like you said, you could hook in any instrumentation you want

john21:03:09

Anyway, super interesting. I appreciate your critique!

šŸ‘ 1
phronmophobic21:03:22

Interesting discussion!

john21:03:29

I'm still not sold on the idea either

phronmophobic21:03:30

I always learn something.

phronmophobic21:03:45

I'm sure new ideas will come up later after a nap.

john21:03:25

Yeah, it's helpful to get some feedback on these weird ideas sometimes, to see if they have any merit

john21:03:54

It's possible that, even in the gui situation I found this pattern useful for, there's a better way still for that problem and I just missed it. But I still have this suspicion it might be useful in one of those niches. I'll think about making it less implicit though, and looking more like traditional data orientation, rather than breaking the closure-boundary rules. That definitely causes a knee-jerk reaction and is hard to swallow lol

john22:03:49

Oh, by validation I thought you were referring to the form validation example. But yeah, I agree, and if you store the function maps as just top level maps you could just spec them at compile time. We already have solutions for most of these problems - you def don't need this just to do that. I was just using that to show an example where you can do stuff inside one of these function maps at the time the function instance is instantiated vs when it is called

john22:03:00

"constructor time" is perhaps a better term

john01:03:46

Okay, so here's another impl that keeps maps at the top level:

(defn mk-effect [{:as ctx :keys [pre af post merge-fn effect]
                  :or {pre identity af identity post identity
                       merge-fn merge}}
                 & args]
  (or effect
      (let [res (post (af (pre (assoc ctx :args args))))
            ctx (if res (merge-fn ctx res) ctx)
            new-args (:args ctx [])
            ef (:ef ctx identity)]
        (apply ef new-args))))

(defn ctxify [ctx-or-fn]
  (if-not (map? ctx-or-fn)
    {:ef ctx-or-fn}
    ctx-or-fn))

(defn comp-key [k ctx ctxs & [catch-fns?]]
  (let [old-afn (k ctx identity)
        afn (if (fn? (first ctxs))
              (if-not catch-fns?
                identity
                (first ctxs))
              (k (first ctxs) identity))
        afns (->> ctxs
                  rest
                  (mapv k)
                  (filter identity)
                  (concat [old-afn afn]) 
                  reverse
                  (apply comp))]
    afns))

(defn merge-ctxs [ctx ctxs]
  (let [merge-fn (-> ctx :merge-fn (or merge))
        ctx (apply merge-fn ctx (filter map? ctxs))]
    ctx))

(defn mk-fn-extender [ctx & [ctxs]]
  (let [ctx (ctxify ctx)
        init (comp-key :init ctx ctxs) 
        pre (comp-key :pre ctx ctxs)
        af (comp-key :af ctx ctxs true)
        post (comp-key :post ctx ctxs)
        new-ctx (-> ctx 
                    (merge-ctxs ctxs)
                    (assoc :init init :pre pre :af af :post post))]
    new-ctx))

(defn extend-fn-map [ctx & ctxs]
  (when-let [init (:init ctx)]
    (mapv init ctxs))
  (mk-fn-extender ctx ctxs))

(defn invoke-fn-map [fn-map & args]
  (apply mk-effect fn-map args))
So then you can do the same thing with extend-fn-map and invoke-fn-map like:
(def add
  (extend-fn-map {:ef +}))
;=> {:ef ʒ :init c ...

(def public-add
  (extend-fn-map
   add
   {:init (fn [ctx]
             (when (-> ctx (contains? :secret))
               (throw (js/Error. "No secrets allowed")))
             ctx)}))
;=> {:ef ʒ :init c ...

(invoke-fn-map public-add 2 3)
;=> 5

(def add-and-inc
  (extend-fn-map
   public-add
   {:secret :sauce 
    :af (fn [{:as ctx :keys [ef]}]
          (assoc ctx :ef (fn [& args]
                           (->> args (apply ef) inc))))}))
;=> error: No secrets allowed

phronmophobic19:03:00

šŸ‘ I think as the approach becomes more data oriented, the implementation matters less and the data specification and semantics become more important.

phronmophobic19:03:08

I'm still not sure I totally understand the intended usage. My intuition is that you still want a way to separate data definitions from validation.

john19:03:28

It's kinda like modeling functions as data and then manipulating them like macros but with functions, for the purposes of sharing implementation data between functions even after they're defined

john19:03:18

Not so much about modeling the world or problem domains - just modeling functions and their various phases: inputs, outputs, construction, finally, validations, whatever properties you want. It's about the behaviors of functions

phronmophobic19:03:12

> for the purposes of sharing implementation data between functions even after they're defined

that sounds like something you specifically want to avoid. It's hard to tell if you're trying to model workflows, data pipelines, or something else

phronmophobic20:03:03

IMO, functions shouldn't have phases, but phases may have functions

john20:03:07

I think you might not need to avoid it when you're dealing with intrinsically hierarchical composition of a large number of functions

john20:03:34

So it's a niche use case I think

john20:03:22

And not a solution that should be used as the default

john20:03:27

But sometimes I think we just might genuinely want impl sharing. Do you really think a case can be made that impl sharing is never good?

john20:03:46

I think clojure is a testament to how much we don't need it, on average

john20:03:03

But there seems to be a vacuum for that niche, for when it is actually good (unless it's never actually good!)

john20:03:02

But how can decomposing functions into data be bad? šŸ˜†

phronmophobic20:03:38

> Do you really think a case can be made that impl sharing is never good?

I don't. I'm not sure if it's good in this case, and I'm also not sure this is a good technique if it is useful.

phronmophobic20:03:22

A more detailed rationale or example use case would be needed for me to give any more specific, useful feedback. Right now, I only have vague intuitions that the approach could be either more general or simplified.

john20:03:08

Yeah I agree, it's kinda amorphous without concrete examples

john21:03:42

Again, I didn't think this example ended up doing the concept justice, because todomvc doesn't require a large hierarchy of components, but it gives an example. Here you can see that new-todo derives from todo-input: https://github.com/johnmn3/comp.el/blob/main/ex/src/todomvc/views/comps.cljs#L56

(def todo-input
  (comp/raw-input
   {:as ::todo-input :with [styled/todo-input a/void-todo]
    :props/void :af-state
    :props/ef (fn [{:keys [on-save on-stop af-state]}]
                (let [stop #(do (reset! af-state "")
                                (when on-stop (on-stop)))
                      save #(do (on-save (some-> af-state deref str str/trim))
                                (stop))]
                  {:auto-focus  true
                   :on-blur     save
                   :value       (some-> af-state deref)
                   :on-change   (fn [ev] (reset! af-state (-> ev .-target .-value)))
                   :on-key-down #(case (.-which %)
                                   13 (save)
                                   27 (stop)
                                   nil)}))}))

(def new-todo
  (todo-input
   {:as ::new-todo :with styled/new-todo
    :props {:placeholder "What needs to be done?"
            :af-state (r/atom nil)
            :on-save #(when (seq %)
                        (dispatch [:add-todo %]))}}))
It just mixes in some styles and properties to augment todo-input. Normally, to do this, we'd parameterize those attributes and merge them in inside the todo-input fn. But below new-todo you can see that existing-todo needs special behaviors depending on the values passed to it (editing, id and title).
(def existing-todo
  (todo-input
   {:as ::existing-todo :with styled/edit-todo
    :props/af (fn [{:keys [editing]
                    {:keys [id title]} :todo}]
                {:af-state (r/atom title)
                 :on-save #(if (seq %)
                             (dispatch [:save id %])
                             (dispatch [:delete-todo id]))
                 :on-stop #(reset! editing false)})}))
(Because the composition of todo-input is no longer locked behind a closure boundary, the :props/af behavior of existing-todo is merged into todo-input. With normal composition, we would have to rewrite todo-input.) Normally when building these components, we close over various aspects of their implementation. In the example above, when composing regular reagent-like component functions, we might design todo-input to handle new-todo's modifications by passing props through to todo-input. But then, suddenly, a customer wants to see existing todos and, when we go to implement it, we realize that the updates existing-todo passes to todo-input need a reference to the editing status of the todo. That requires a reimplementation of todo-input: you can parameterize a function that takes the editing status, letting todo-input do the work of passing the editing status to existing-todo's passed-in function that returns the new attributes. But, unfortunately, we've added 10 more pages to the app since we defined todo-input, and if we change its behavior now we'll need to do lots of testing to make sure we didn't break all those downstream consumers. So instead we decide to make todo-input2 with the new ability existing-todo needs, simply because it's easiest to copy and paste the code, add the one change, call it only from the new functions, and call it a day. Then you end up with all these vertical compositions with massive duplication between impls, because that's easier than fishing new parameters through all the functions in a composition hierarchy and then testing the whole world downstream of those changes. And if you do go that route, todo-input takes on massive amounts of complexity to handle all possible demands of all possible callers, parameterizing more and more.
With this solution, existing-todo can add behavior to todo-input downstream - just the minimal amount it needs from todo-input to do its job, all without having to change the impl of todo-input.

john21:03:00

And in a lot of react code bases, we'll have a "managed-component" that will wrap an [:input ... element. We want lots of advanced features out of that state management component - form validations, various handlers, change/click/blur, default values, text parsing, text formatting - tons of features that keep growing, until finally your managed component function is hundreds of lines long, handling all possible requirements for all possible callers. Then, suddenly, there's a new feature request and it's going to require a change to managed-component - quick, somebody get Joe, he's the last one that understood that hairball, etc etc. With this scheme of function data extension, we can push some changes into the impl of managed-component downstream, via a minimal change, without having to reimplement managed-component and test its consumers, or have to support versions 1, 2 and 3 in parallel.

john23:03:16

Another thing I didn't like about that todomvc example, I didn't use the compel framework's managed component. I used raw-input so as to try to stay as true as possible to the way the reframe example todomvc app was doing state management. It would have looked a lot cleaner getting rid of all the local state atoms and letting the framework abstract state management away. I just wanted to keep the comparison about function composition and adding implicit state management would have been less an apples to apples comparison.

john21:03:22

So there's two main benefits there: 1. Don't change your code, grow it: instead of changing existing mechanisms to accommodate new features, make new versions and leave the old ones there, and 2. Don't grow by duplication but by sharing: we could achieve pure growth by copying and pasting a new version on each new feature, but then we have to support multiple copies, fixing bugs in multiple places instead of one. If we share implementation, we can achieve change through growth without code duplication

phronmophobic22:03:37

What's the difference between this and normal function composition? eg.

(def add-and-inc (fn [& args] (inc (apply + args))))

phronmophobic22:03:41

(def new-f
  (fn [& args]
    (do
      (before-stuff)
      (let [new-args (modify-args)
            result (apply old-f new-args)]
        (after-stuff)
        result))))

phronmophobic22:03:14

I'm also still hung up on how the behavior is overloaded. If it gets called with a "context", it returns a new function; otherwise, it applies the function. Is that right? How does it know which "mode" it's being called in (i.e. how does it know if the argument is a "context thing")?

john22:03:09

add-and-inc closes over + and we can no longer update the semantics of + for someone who wants all the beautiful implementation work in add-and-inc but just wants something a little bit different from the way it uses +. Here, we can get in between the inc and the + in add-and-inc, as a user consuming add-and-inc, because add-and-inc carries a description of the history of its composition, which can be decomposed later and recomposed.
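To make that concrete, here's a minimal toy sketch (not the actual af.fect API; `mk-xf` and `extend-xf` are invented names) of a function that carries its composition as data, so a consumer can swap out the `+` that `add-and-inc` would otherwise close over:

```clojure
;; Toy sketch (not the af.fect API): a function that carries its own
;; composition as metadata, so a consumer can swap the closed-over op.
(defn mk-xf [{:keys [op post] :as ctx}]
  (with-meta
    (fn [& args] (post (apply op args)))
    {:ctx ctx}))

(defn extend-xf [xf overrides]
  (mk-xf (merge (:ctx (meta xf)) overrides)))

(def add-and-inc (mk-xf {:op + :post inc}))

(add-and-inc 1 2 3) ;=> 7

;; get "in between" inc and + without rewriting add-and-inc:
(def sub-and-inc (extend-xf add-and-inc {:op -}))

(sub-and-inc 10 3) ;=> 8
```

The real library carries much more (pre/post phases, mixins, init), but the metadata-carrying closure is the kernel of the idea.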

john22:03:58

The most efficient thing would probably be a protocol/deftype thing like you said, to make the parameter checking fast. But it's really just carrying this impl history as metadata on the function, or carrying it on the inside and dumping it from a special param. I did the latter just because it's simpler, so as to get the idea across. But a Better Implementation would involve protocols I think.

phronmophobic22:03:26

Why is invoking overloaded with both extension and normal usage?

john22:03:29

And you don't have to have it be a parameter based signal for the mode, you can call it from the outside on every extension, (extend-fn foo bar ...

john22:03:46

Yeah, it doesn't matter either way. In this implementation, the fns themselves are very much carrying their implementation history with them, so it just felt more natural to use invocation for both modes. It is a function of the function itself that you're calling when you extend it.

phronmophobic22:03:47

For me, overloading the invocation is confusing.

phronmophobic22:03:43

It seems like these functions should also be more data-like: Eg.

(def +s
  (af/fect
   {:as ::+s
    ;;  :with mocker
    :op +
    :ef (fn [{:keys [args]}]
          {:args (apply strings->ints args)})
    :mock [[1 "2" 3 4 "5" 6] 21]}))

(+s "1" 2)
;=> 3
(:as +s) ;; ::+s
(:mock +s) ;; [[1 "2" 3 4 "5" 6] 21]
(keys +s) ;; (:as :op :ef :mock :with)

((assoc +s
        :op -)
 5 4) ;; 1

phronmophobic22:03:23

It's kind of hard to understand what's going on because I don't really know what half of these attributes do like :af/props :af, :with, etc.

john22:03:14

That's a good idea regarding the data

john22:03:07

Yeah, :with adds more ctxs from other affects, so you don't have to have single inheritance. You're basically mixing in the other affect contexts while composing their affects like you would with single inheritance path
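A rough, hypothetical reduction of what :with could mean as plain data (`with-mixins` is an invented name; the real af.fect semantics may differ): merge each mixin's ctx map into the base, appending :af chains in mixin order so their transformations still compose.

```clojure
;; Hypothetical sketch of :with as plain data (not the real impl):
;; merge each mixin's ctx map, appending :af chains in mixin order.
(defn with-mixins [ctx mixins]
  (reduce (fn [c m]
            (-> (merge c (dissoc m :af))
                (update :af (fnil into []) (:af m []))))
          ctx
          mixins))

(with-mixins {:as :base :af [:a]}
             [{:af [:b]} {:style :dark :af [:c]}])
;=> {:as :base, :af [:a :b :c], :style :dark}
```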

john22:03:07

:props/af is a special impl of af that affects the :props key, which is introduced by a props affect. It's for dealing with the props of html elements

john22:03:43

:props/af is defined as an :af in the props affect

john22:03:45

That's all built in compel - the props affect - because it's not needed in the base af.fect lib. We just extend the behavior of af.fect from the outside

john22:03:33

:with can take one affect or a vector of multiple affects to mixin

phronmophobic22:03:59

Do you have any other use cases other than UI components? It's hard to tell if this has general applications or just trying to manage the goofiness of UI programming.

john22:03:10

I think it's maybe a 5% situation, not often, but it can probably be many different shapes

john22:03:18

Most stuff done in libs doesn't require massive duplication. That's why it's a lib, right? It's more often in applications that live over time

john23:03:05

When you have some massive managed-component like function that sits half way up the composition hierarchy for like 50 or 100 other functions, down various branches, and changing it becomes a very sensitive operation

john23:03:04

On the backend, if you have some api that has hundreds of endpoints and something in the stack is acting like a managed-component for all these paths, maybe

phronmophobic23:03:07

That's why I was asking for use cases outside of UI programming. I tend to think a lot of challenges in UI programming are fundamentally due to the underlying OO foundation which UI frameworks don't really address.

john23:03:52

Clojure is very good at not needing it really

john23:03:34

It's so easy to change code, we often are better off just adding the new feature to managed-component

john23:03:56

If it'll never get that complex, no big deal

phronmophobic23:03:09

The cool thing about pure functions is that the only thing you need to know about them is what arguments they require and return value to expect. Unfortunately, UI components are not even close to pure functions (even in clj/cljs).

john23:03:40

Well what I'm talking about can still be pure functions

john23:03:09

But we are altering the semantic of an upstream function, which just feels wrong at first lol

john23:03:07

It feels more variable because it can be changed. But the change still flows in the direction of impl. You're not actually changing the upstream function for other callers

john23:03:32

But yeah, in the UI, we often make them non-pure very quickly

phronmophobic23:03:58

It seems like in your todomvc example, the only attributes that are used are :af/prop related (and :with which also just uses :af/prop related stuff)?

john23:03:36

:with is built into af.fect

john23:03:09

:props/af is built in the comp.el library, on top of af.fect

phronmophobic23:03:04

Right. It seems like most of this stuff isn't really about modifying args and return values, but dealing with props.

john23:03:17

Yeah, in the context of the UI, most of what you're going to want to do is update those props and pass them around. We could have done all that stuff in an :af but then we'd have to get the :props out of the context every time we want to update the props. :props/af just gives you the ability to focus your update to just the :props key within the context.
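The "focus the update on :props" idea can be sketched as a small lens-like wrapper (`props-af` and `add-placeholder` are invented names; the real comp.el mechanics may differ): lift a function over the props map into a function over the whole context.

```clojure
;; Sketch of the ":props/af focuses on :props" idea (names invented):
;; lift a fn from props -> prop-updates into a fn over the whole ctx.
(defn props-af [f]
  (fn [ctx]
    (update ctx :props #(merge % (f %)))))

(def add-placeholder
  (props-af (fn [_props] {:placeholder "What needs to be done?"})))

(add-placeholder {:op :render :props {:value ""}})
;=> {:op :render, :props {:value "", :placeholder "What needs to be done?"}}
```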

phronmophobic23:03:31

the todo example also doesn't seem to have any example of "modifying a component up the chain".

john23:03:50

And it's super convenient that a downstream consumer of a component can update the props of an upstream component it's calling on an ad hoc basis, without affecting other callers

phronmophobic23:03:18

> And it's super convenient that a downstream consumer of a component can update the props of an upstream component it's calling on an ad hoc basis, without affecting other callers
What's an example of this?

john23:03:43

existing-todo above. It's adding a props affect that happens upstream of its parent's props effect

phronmophobic23:03:25

Can't that be done with regular function composition?

(def existing-todo
  (fn [{:keys [props] :as m}]
    (let [{:keys [editing]
           {:keys [id title]} :todo} props]
      (todo-input
       (assoc m
              :props
              {:af-state (r/atom title)
               :on-save #(if (seq %)
                           (dispatch [:save id %])
                           (dispatch [:delete-todo id]))
               :on-stop #(reset! editing false)})))))

john23:03:41

Yeah, but now if a caller of existing-todo wants different semantics out of how todo-input works, they can write a new todo-input, but they're still going to have to write a new existing-todo too

john23:03:13

Because we're closing over these details

john23:03:41

Now we have todo-input1 and 2, and existing-todo2, all because special-existing-todo needed something special out of todo-input1 that it didn't have. Rewriting todo-input1 is one thing. But having to rewrite existing-todo too sucks. It shouldn't have to change just for special-existing-todo. special-existing-todo can simply point existing-todo to a different version of todo-input, for just its call

phronmophobic23:03:51

Ok, now I'm convinced that this could be simplified.

phronmophobic23:03:01

at least for this use case

john23:03:46

Wanna see it!

phronmophobic23:03:21

For existing-todo, it seems like the problem is that it doesn't actually care about todo-input

john23:03:14

doesn't care, in the sense that it can augment its semantics?

john23:03:59

Oh, you're brewing an idea here

john23:03:30

If you designed todo-input such that everything it did was parameterized, you could feed it right down through the props, and do that for a chain of callers, letting downstream ones signal upstream ones just by passing that props context along

john23:03:08

But when you find yourself parameterizing everything about some deep function, maybe it should just be a fully parameterizable function, built for doing that

phronmophobic23:03:44

Sorry for the slow response, but part of the trouble is that I'm not super familiar with comp.el which is built on af.fect and I'm not super familiar with re-frame which is built on reagent which sits on a mountain of other stuff.

john23:03:35

No worries, I appreciate your thoughts on it

phronmophobic23:03:58

so in the example, todo-input is essentially just a text-input that saves on enter?

john23:03:18

Yeah, it followed the re-frame todomvc method of how it handled state as much as possible

john23:03:04

I think they just wanted to show that re-frame could be mixed with local state

john23:03:37

Which made for an interesting test for comp.el. Would have been a lot cleaner with just re-frame, abstracted away. As long as every input element you hang in the hiccup tree has a unique id, the framework should be able to handle state for you transparently

john23:03:31

And then state management just gets defined by a use-state affect that gets mixed in to any input elements that need to be managed

phronmophobic00:03:38

So editing is a local prop, but I assume you're not supposed to be able to edit more than one todo at a time, right?

phronmophobic00:03:59

actually, it's not a prop, it's a local r/atom

john00:03:37

temporarily stored in the props

john00:03:27

todo-input voids that key from the :props later though, so it doesn't end up in your html props: :props/void :af-state

john00:03:56

Might not be the best api

john00:03:02

having to do that

john00:03:07

it can be done many ways

phronmophobic00:03:23

I'm trying to show how I might write it, but since editing is a property that belongs to a list or an app, I probably wouldn't have it as a local prop.

john00:03:00

And I'm not sure I got the api right with af.fect either, more-so just proposing that the idea in general might be useful

john00:03:57

I could have stored the atom in the outer context

phronmophobic00:03:04

Do you ever modify existing props beside callbacks like on-save and on-stop?

john00:03:13

and did all operations in :af instead of :props/af

john00:03:52

Styles are getting merged into the styles of the parent components

phronmophobic00:03:19

I gotta go walk the dog, but I'll think about this some more. Maybe a little walk will help.

john00:03:25

which gets handled by the props affect I believe

phronmophobic16:03:54

For this use case, it seems like there's the render function (eg. todo-input) and then there are functions for modifying the input arg to the render function, (eg. existing-todo). The fns that modify the input don't need the render-fn and you can just leave it out:

(defn existing-todo
  [{:keys [editing]
    {:keys [id title]} :todo
    :as m}]
  (merge m
         {:af-state (r/atom title)
          :on-save #(if (seq %)
                      (dispatch [:save id %])
                      (dispatch [:delete-todo id]))
          :on-stop #(reset! editing false)}))
which could be used like:
(todo-input (-> {}
                (existing-todo)
                (other-modifier)))

phronmophobic16:03:29

It's not really clear if having a way to convey the render function alongside its modifiers is useful, but if it is, then you could just use a map:

{:render todo-input
 :middleware [existing-todo
              sparkly
              etc]}
The idea here is to describe a UI component as data.
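One possible way to "run" such a map (a sketch under stated assumptions: `run-component` is hypothetical, and middleware here are plain props -> props fns) is to thread initial props through the middleware, then hand the result to :render:

```clojure
;; Hypothetical runner for a {:render fn :middleware [fns]} component map:
;; thread the props through each middleware fn, then call the render fn.
(defn run-component [{:keys [render middleware]} props]
  (render (reduce (fn [p mw] (mw p)) props middleware)))

;; stubbed example: render just returns hiccup-ish data
(def comp-map
  {:render     (fn [props] [:input props])
   :middleware [#(assoc % :auto-focus true)
                #(assoc % :placeholder "What needs to be done?")]})

(run-component comp-map {:value ""})
;=> [:input {:value "", :auto-focus true, :placeholder "What needs to be done?"}]
```

Swapping the render fn or reordering middleware is then just `assoc`/`update` on the map.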

phronmophobic16:03:35

In many ways, the code ends up looking similar, but for me, the framing of here's a map that describes a component is easier to learn and reason about than trying to frame it in terms of an "extensible function".

phronmophobic16:03:51

It also means you don't really have to learn anything new to make a slightly different component:

(assoc comp
 :render special-todo-input)

phronmophobic16:03:36

Intuitively, I have a strong skepticism about "extensible function" as a concept. Functions are already extensible: function composition, multimethods, arity overloading, protocols, or accepting extensible data like maps. If you want to retain information that can be further manipulated, use a map (or other data).

phronmophobic16:03:29

I actually really like the approach from https://vimeo.com/861600197

phronmophobic16:03:44

That being said, I do think "invokable data" (which is maybe the same thing as the "extensible function" with a different framing) might be an interesting idea for other use cases. It's mostly that if you can use pure data and functions, it should be preferred. At least for me, it took quite some time to start to understand the af.fect API which has its own language/interface for manipulating these fns. At least in its current form, it's a bit of a rabbit hole. existing-todo derives from todo-input which derives from comp/raw-input which derives from el, etc. If it was just a map, I feel like I can examine the end result and not really worry about how it was derived, but as an extensible function, I feel like I need to not only understand the algebra of extensible functions, but also understand existing-todo's whole ancestry which is opaque in the current iteration.

john19:03:20

Yeah, there's pros and cons to the pure data UI. I think it's a better tradeoff than HTMX. You can drive the whole thing from the backend and just ship the hiccup. You do end up making a lot of DSLs to wrap things that need to be functions on the frontend, but once they're written it works. You still end up with some of the impedance mismatch of htmx, though, for those situations where you genuinely need to pass a lambda. And when you go pure data on the front end, you'll still have these massive reduction/transformation steps where you dump the whole world in, all the DSLs get computed into functions, lots of magic happens that only a few people on the team understand, and out the other side magically pops a new world made out of actual functions. And there'll often be 3 or 4 of those reduction steps, making it very hard to track where everything is going. I've worked on an app built completely out of pure data, with this chain of world transformations, and while I appreciated the beauty of the abstraction (and the ability to transparently migrate some parts between the front end and back end, etc.) I'm not sure I'd want to support that kind of architecture again. Every time you want to add something dynamic to the system, you have to update so many things in so many places. It's not so bad for a TurboTax-like app, where every page is similar with just different text and a half dozen types of form elements; pure data can express that domain easily enough. But if you have some general-purpose dashboard that needs to change fast for a diverse audience, then evolving that pure data app quickly is going to be hard IMO.

john19:03:55

Yeah, I like this invokable map idea, and maybe rebuilding the core from the ground up around that idiom. Might simplify it more

john19:03:50

I did make a utility fn for trying to keep track of the rabbit hole where an affect came from. Regular functions are just as opaque though. For the +sv affect in the readme, the utility fn prints:

{:args (),
 :finally [:base],
 :was :user/+s,
 :is :user/+sv,
 :joins [:mock :void :base],
 :affects [:mock-0 :with-0 :void-0 :base],
 :op #object[cljs$core$_PLUS_],
 :void [:with :mock],
 :effects
 [:user/+sv-0
  :user/+s-0 
  :children-0 
  :base], 
 :mocks [[1 [2]] 3]}
So at least you can chase down everything it's made of, which might arguably be harder with just functions. We could do more here to store all data for all affects being composed, so that we could print out the entire context maps for every ancestor, but the functions on some of those keys are still going to be opaque, unless you turn the whole outer world into a dsl that lives in your data

john20:03:02

But I mean, everything about this implementation is a prototype. I would never recommend using this in prod, where simply passing a map with :as in it changes the mode of a function. That's destined to blow up somewhere. I'm deliberately keeping some aspects of the impl simple, so as to just show the concept. A real implementation would probably involve protocols/deftype (or maybe defrecord) and have much better instrumentation for tracing back the composition of an affect. Also, interceptor chains might be a better abstraction for people to manage the ordering of affects. We should probably delay comping until the very end, allowing you to put a new fn between any two fns in the stack. I didn't take it that far, in terms of granularity, but a final solution should probably be able to get fine grained like that, perhaps via a lower-level api.
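"Delaying the comp until the very end" could look like keeping the pipeline as a vector of named steps, so an extension can splice a new fn between any two existing ones (`insert-after-step` and `run-steps` are sketch names, not part of af.fect):

```clojure
;; Sketch: keep the pipeline as named steps instead of comp-ing eagerly,
;; so an extension can splice a new fn between any two existing ones.
(defn insert-after-step [steps k new-step]
  (reduce (fn [acc [n _ :as step]]
            (if (= n k) (conj acc step new-step) (conj acc step)))
          []
          steps))

(defn run-steps [steps x]
  (reduce (fn [v [_ f]] (f v)) x steps))

(def steps [[:parse #(Long/parseLong %)] [:inc inc]])

(run-steps steps "41") ;=> 42

;; splice a doubling step between :parse and :inc:
(run-steps (insert-after-step steps :parse [:double #(* 2 %)]) "20")
;=> 41
```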

phronmophobic20:03:58

> a final solution should probably be able to get fine grained like that, perhaps via a lower-level api.
My idea would be to have a data specification, not an API. Obviously, there would be helper functions that make the common case easy, but otherwise, it would be purely descriptive.

phronmophobic20:03:50

> Yeah, I like this invokable map idea, and maybe rebuilding the core from the ground up around that idiom. Might simplify it more
I think the usage and implementation would end up looking pretty similar, but I think there's a huge leap in reuse if the way you read and create these things is just using normal data functions. I think it also aids in understanding.

phronmophobic20:03:45

> For the +sv affect in the readme, the utility fn prints:
The goal is to not need a specific utility function. You should be able to inspect the result like any other data, using familiar tools like portal and clerk.

john20:03:07

What is the semantic for how downstream maps can affect upstream maps? Downstream maps should be able to shadow values of upstream maps, redefine them, wrap the ins and outs, delete them. Do you have an idea how that might look, using purely a description language, that is simple?

phronmophobic20:03:53

downstream maps don't affect upstream maps.

john20:03:15

Well, they alter their own perception of the upstream map they merge into

john20:03:54

That's the trick here, allowing callers to push customization upstream

phronmophobic20:03:08

I know I keep harping on the PLOP related terms, but I really do think the perspective matters.

john20:03:00

Well, my language sounds like I'm actually changing the upstream function lol it's confusing

phronmophobic20:03:17

let's say you have:

{:render todo-input
 :middleware [existing-todo
              sparkly
              etc]}
you can create a new map that uses special-todo-input instead of todo-input like so:
(assoc comp
 :render special-todo-input)

phronmophobic20:03:24

mostly pseudo code, but you could mark your sparkly todo input dull with something like:

(def existing-todo :existing-todo)
(def sparkly :sparkly)
(def dull :dull)
(def etc :etc)
(def todo-input :todo-input)

(def comp
  {:render todo-input
   :middleware [existing-todo
                sparkly
                etc]})

(require '[clojure.walk])
(clojure.walk/postwalk-replace {sparkly dull}
                               comp)
;; {:render :todo-input,
;;  :middleware [:existing-todo :dull :etc]}

phronmophobic20:03:27

And so most of the design is specifying the semantics of the attributes like :render :middleware , etc.

phronmophobic20:03:57

For convenience, there will probably be helpers for common transformations and initializers.

phronmophobic20:03:32

As well as helpers for inspection and validation.

phronmophobic20:03:56

Some of the motivation for this approach is also from The Design of Everyday Things. It talks about how it's easier to reason about wide, flat decision trees or narrow, long decision trees. This is trying to turn the problem into a wide, flat decision tree, since all that matters is the resulting data structure. What you don't want is a medium-width, medium-depth decision tree, which I think is where the mutable inheritance model ends up.

john20:03:41

Yeah, as long as downstream users are able to get to the original data of any impl map in its ancestry, you can use your regular data manipulation fns to update any function way up the chain, however you want. My existing impl just keeps that data around for you as a hidden value, but if it's more like callable maps, that could be simplified

john20:03:10

Hmm, I don't know if I agree with that intuition...

john20:03:15

It's pretty abstract to get opinionated about though

john20:03:22

"What's better, data or functions?" lol

john20:03:20

Different question, but it just seems similarly hard to qualify

phronmophobic20:03:23

I think one other subtle difference is that ancestry is this ordered thing and it matters where things came from. With the map based approach, the "history" doesn't matter. It's not a map that derives from another map. It's just a map with X, Y, Z transformations. It doesn't matter how they got there.

john20:03:30

I think you want to be able to reuse some of the decisions made.. Perhaps your parent decided to delete one of the keys of your grandparent?

john20:03:55

You can always reintroduce that key, but we want to reuse the parent's decision when possible

john20:03:35

So you get some of the existing benefits of branching composition between fns

phronmophobic20:03:00

I'm saying you should absolutely avoid caring about how the map was produced. You should only care if the result has or doesn't have an attribute.

john20:03:12

lol I totally misread you... this space is hard to talk about

john20:03:14

oh, I was riffing on the "order doesn't matter" sentiment

phronmophobic20:03:29

At least for me, it's taken a very long time to internalize the philosophy of "just use maps" and I still get it wrong sometimes.

john20:03:41

I guess it doesn't matter that a particular part of the shape of the current map came from the parent or the grandparent... From an organizational perspective, some might like to update attrs in a way that is associated with the map/affect it came from, but I suppose that's just projected organization and not strictly necessary. The history of composition can be traced in code, like everything else

john20:03:52

Well, some of these affects that are being composed together, between +, +s and +sv, for instance - the order of how those transformations over arguments are applied matters

john20:03:09

And, we should also be able to stick an effect between + and +s, in the map for +sv, not just before or after both of them

john20:03:46

For full granularity

phronmophobic20:03:06

right, you might have an ordered sequence of middleware as part of your specification.

john20:03:20

I'd recommend using single inheritance as much as possible, bringing in mixins horizontally only when necessary

john21:03:06

So, perhaps like {:as :foo :from bar :with [x y z] ...

john21:03:22

where from is the direct parent

phronmophobic21:03:27

I'm not sure what you mean by inheritance, but I don't think you want it.

john21:03:53

Well, you're pulling in some behaviors from those things in the middleware

john21:03:08

that's pretty much your parent maps, so to speak

phronmophobic21:03:53

just create a new map with the attributes you want based on "merging" the "parent" with any new attributes.

john21:03:19

Right, I'm thinking you were thinking the middleware vector would contain these maps that get merged in, but their ordering is used to determine how any functions are ordered that need to line up

phronmophobic21:03:58

yea. it seems like you need some way to run a series of transformations on the input.

phronmophobic21:03:44

since you don't have the input until later.

phronmophobic21:03:35

for other attributes, you have all the info you need and can just use a new value for the attribute or remove the attribute as needed.

john21:03:08

Okay, there's still questions I have about this route, but I think we're getting pretty far into the weeds where it'd be easier to talk about if we just had an implementation. I'm going to ruminate on an invokable map impl. I'd like a prototype impl to be as similar as possible across clj and cljs, so I'll think about it.

john21:03:37

because some of these keys, the work they do is not on the inputs but on the environment itself (the currently merged history of maps, depending where we are in that chain). That's all some affects do, update the map for you so you don't have to

john21:03:37

But yeah, some of this stuff would shake out better in an invokable map impl, where a lot of the affect composition can just be done with fns we use on maps

phronmophobic21:03:55

I don't think you need a key that edits the map. You should just be able to edit the map

john21:03:56

from the outside

john21:03:49

right, you could define that fn and apply it from the outside, as opposed to it being a trait within the map

john21:03:58

which has tradeoffs

john21:03:45

We'll see, I gotta flip it - hang the fn off the data instead of the data in the fn, then see if some of your suggestions can simplify it further

phronmophobic21:03:20

🤞 hopefully, I didn't encourage you down a more complicated path!

john21:03:48

Cool, I'll let you know what I come up with. Thanks for placing your seasoned eyes on this problem space, really appreciate your intuition here

john04:03:02

So records actually work pretty well for this:

(ns af.fect2
  (:require [clojure.pprint :as pp]))

(defn run-af [env]
  (let [afs (:af env [])
        op (:op env (fn [& args] args))
        new-env (->> afs (reduce (fn [arg af] (af arg)) env))]
    (fn [& args]
      (let [ins (:in new-env [])
            new-args (->> ins (reduce (fn [arg in] (apply in arg)) args))
            res (apply op new-args)
            outs (:out new-env [])
            out-res (->> outs (reduce (fn [arg out] (out arg)) res))
            fin-env (assoc new-env :res out-res)
            fins (:finally fin-env [])]
        (->> fins (reduce (fn [arg fin] (fin arg)) fin-env))
        out-res))))

(defmacro daf [afname ctx]
  `(do (defrecord ~(symbol (str ">" afname)) []
         clojure.lang.IFn
         ~@(->> (range 22)
                (map (fn [n]
                       (let [args (for [i (range n)] (symbol (str "arg" i)))]
                         (if (empty? args)
                           `(~'invoke [this#]
                                      ((run-af this#)))
                           `(~'invoke [this# ~@args]
                                      ((run-af this#) ~@args)))))))
         (~'applyTo [this# args#]
           (apply (run-af this#) args#)))
       (def ~afname
         (merge (~(symbol (str "->>" afname)))
                ~ctx))))

(daf affect
  {:id :affect
   :af []
   :in []
   :op (fn [& args] args)
   :out []
   :finally []}) ;=> #'af.fect2/affect
affect ;=> #af.fect2.>affect{:id :affect, :af [], :in [], :op #function[af.fect2/fn--7889], :out [], :finally []}
(pp/pprint affect)
; {:id :affect,
;  :af [],
;  :in [],
;  :op #function[af.fect2/fn--7889],
;  :out [],
;  :finally []}
(def a+ (merge affect {:id :+ :op +})) ;=> #'af.fect2/a+

(pp/pprint a+)
; {:id :+,
;  :af [],
;  :in [],
;  :op #function[clojure.core/+],
;  :out [],
;  :finally []}
(apply a+ 1 (range 30)) ;=> 436

(a+ 1 2 3 4 5) ;=> 15

(def a_inc_+_dec
  (-> a+
      (assoc :id :inc-+-dec)
      (update :in conj (fn [& args] (mapv inc args)))
      (update :out conj #(dec %)))) ;=> #'af.fect2/a_inc_+_dec
    ;;   (update :finally conj (fn [res] (println :done! res)))))

(pp/pprint a_inc_+_dec)
; {:id :inc-+-dec,
;  :af [],
;  :in [#function[af.fect2/fn--7901]],
;  :op #function[clojure.core/+],
;  :out [#function[af.fect2/fn--7903]],
;  :finally []}
(a_inc_+_dec 1 2) ;=> 4

(def more-stuff (assoc a_inc_+_dec :more :stuff))

(pp/pprint more-stuff)
; {:id :inc-+-dec,
;  :af [],
;  :in [#function[af.fect2/fn--7901]],
;  :op #function[clojure.core/+],
;  :out [#function[af.fect2/fn--7903]],
;  :finally [],
;  :more :stuff}
(more-stuff 1 2) ;=> 4
I'm going to see if it scales with comp.el

🆒 1
phronmophobic05:03:50

you can probably use map->MyRecord directly instead of ->MyRecord + merge

john05:03:07

Yeah I barely ever use records

john05:03:28

I forgot, what's the difference between kvs added after the record is made? Vs defined in the vector in its definition? The original ones have faster lookup or something?

phronmophobic05:03:08

I don't remember the performance differences, but I think the record will always contain the key if it's included in the definition

phronmophobic05:03:30

(get my-record :defined-key :not-found) ;; nil

phronmophobic05:03:41

and I don't remember the exact behavior, but dissoc is weird for defined keys. It either converts it to a map or sets the key to nil. I can't remember which.
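For reference, a quick REPL sketch of both behaviors (the record name and keys here are hypothetical, just for illustration):

```clojure
;; Hypothetical record, to illustrate defined-key semantics:
(defrecord Point [x y])

(def p (map->Point {:x 1}))

(get p :y :not-found)        ;=> nil — defined keys always exist, even if unset
(record? (dissoc p :extra))  ;=> true — removing an undefined key keeps the record
(record? (dissoc p :x))      ;=> false — removing a defined key degrades it to a plain map
(map? (dissoc p :x))         ;=> true
```

So `dissoc` on a defined key doesn't set it to nil; it converts the record to an ordinary map.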

phronmophobic05:03:18

For another similar lib, I've used defrecord without specifying any keys in the definition.

john05:03:03

I might add those half dozen defaults if there's a perf benefit or whatever. But yeah, can't get rid of them I think

john06:03:24

Posting this back to the channel. With the help of @smith.adriane, we've winnowed it down into a more data-oriented approach:

(defn run-af [env & args]
  (let [afs (:af env [])
        op (:op env (fn [& op-args]
                      (case (count op-args)
                        0 nil
                        1 (first op-args)
                        op-args)))
        new-env (->> afs (reduce (fn [arg af] (af arg)) env))
        ins (:in new-env [])
        new-args (->> ins (reduce (fn [arg in] (apply in arg)) args))
        res (apply op new-args)
        outs (:out new-env [])
        out-res (->> outs (reduce (fn [arg out] (out arg)) res))
        fin-env (assoc new-env :res out-res)
        fins (:finally fin-env [])]
    (->> fins (reduce (fn [arg fin] (fin arg)) fin-env))
    out-res))

(defmacro daf [afname ctx]
  `(do (defrecord ~(symbol (str ">" afname)) []
         clojure.lang.IFn
         ~@(->> (range 22)
                (map (fn [n]
                       (let [args (for [i (range n)] (symbol (str "arg" i)))]
                         (if (empty? args)
                           `(~'invoke [this#]
                                      (run-af this#))
                           `(~'invoke [this# ~@args]
                                      (run-af this# ~@args)))))))
         (~'applyTo [this# args#]
           (apply run-af this# args#)))
       (def ~afname
         (~(symbol (str "map->>" afname)) ~ctx))))

(daf add {:op +})

(def add-and-inc
  (-> add
      (update :out conj inc)))

(add-and-inc 2 2) ;=> 5
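To make the "derive without touching the base" point concrete, here's a self-contained sketch using plain maps and a stripped-down runner in place of daf's record (names hypothetical):

```clojure
;; Minimal stand-in for run-af: thread args through :in, apply :op,
;; then thread the result through :out.
(defn run [env & args]
  (let [args' (reduce (fn [a in] (apply in a)) args (:in env []))
        res   (apply (:op env) args')]
    (reduce (fn [r out] (out r)) res (:out env []))))

(def add-env {:op +})

;; New callers opt into new behavior; existing callers of add-env are untouched:
(def add-and-inc-env (update add-env :out (fnil conj []) inc))

(run add-env 2 2)         ;=> 4
(run add-and-inc-env 2 2) ;=> 5
```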

Ludger Solbach07:03:08

I didn't follow the complete discussion but I find the concepts really interesting. It reminds me of Aspect-Oriented Programming for FP.

john14:03:17

Thanks, yeah, I can see that. Difference here I think is that we're not "cross cutting" as much as cutting down the center of our functional pipeline. You could add orthogonal concerns, like for logging or something, but you can also patch in function behaviors without having to change existing code. If our pipelines are up and down and "cross cuts" are horizontal, this is more like a vertical version of AOP I think

john14:03:06

It also doesn't need a whole program preprocessor for "weaving" in behaviors, since we're keeping our functions as data that can be manipulated at runtime

Ludger Solbach16:03:07

Much of AOP is also possible with interceptors, like adding behaviour in pedestal. In Java applications, without using AspectJ, you could resort to e.g. ServletFilters as interceptors, which is basically the same mechanism as in pedestal. Interceptors are a bit limited, of course, because it's just one join point that you can instrument. As far as I understand your design, it's more flexible than that.

phronmophobic16:03:18

Yea, I would also compare this approach to AOP.

phronmophobic16:03:23

For the design, I think the most important part is thinking about which attributes to support and what their semantics should be.

john16:03:25

Yeah, and that's still up in the air. I'm just spitballing what might be good semantics but I've been trying lots of different ones, even in these impls. My hope is to provide just the minimal thing that allows for others to build anything they could want on top of it.

john16:03:54

I don't know if I'd say my design is more flexible than interceptors. Perhaps in the sense of being simpler. But I do like how the enter and leave of interceptors give an interceptor author the ability to update both the upstream and downstream of a given interceptor. I think people are going to need easy ways to manipulate these chains even after they're defined. So I'm thinking about a version where :in, :op and :out are interceptor chains. Or a version where there's just one interceptor chain, with the :op at the end, and :enter constitutes the :in and :leave constitutes an :out.

john16:03:13

In the above example, you could just update the order of any of those vectors because they're being stored as data, so you can use any data slicing methods your prefer.
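Since the chains are plain vectors, ordinary data surgery works. A small sketch (keys as in the daf example above; the particular steps are hypothetical):

```clojure
(def env {:op + :out [inc #(* 2 %)]})

(update env :out (comp vec reverse))  ; run the two steps in the opposite order
(update env :out #(subvec % 0 1))     ; drop the last step
(update env :out #(into [dec] %))     ; splice a new step in front
```

No special chain-manipulation API is needed; `update`, `subvec`, `into`, etc. already cover it.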

john16:03:46

Simpler and more general, but could get messier, whereas interceptors might bring more sanity

phronmophobic16:03:56

> I think people are going to need easy ways to manipulate these chains even after they're defined.
That's actually one of the things I specifically don't like about interceptors. You're essentially creating a mini virtual machine.

john16:03:17

Yeah, I have beef with interceptor complexity too lol

john16:03:04

It's the best try at solving that kind of problem I know of - changing dispatch semantics in a pipeline over time

john16:03:15

In an organized way

john16:03:23

Yeah, interceptors are a later possible feature I think. You can do everything you want with access to that vector

phronmophobic16:03:24

For designing the semantics, I think it would be helpful to have a rationale or problem statement written.

john16:03:25

"Tired of changing your code every time a new feature is requested? Use ThisThing and add changes that only affect callers that need the new feature, without breaking existing callers (also reduces defensive code duplication). ThisThing does this by giving you the ability to update behaviors at various points in the lifecycle of a function, even after it has been defined, by turning functions "inside out" and treating its parts as data."

phronmophobic16:03:22

That's more like a sales pitch. I was thinking more like https://youtu.be/fTtnx1AAJ-c?si=vmaLXEP70WYYzxPK&t=1899

john17:03:34

Yeah I'll meditate on that

john17:03:21

In some ways, it allows you to treat functions like macros, because by turning functions into data, functions can act on them in a similar way to macros, allowing you to sneak into the scope of a fn, behind its closure wall, and to tweak that data before it executes. So in that way, it's so general that it's hard to choose just one "problem statement." Like macros, there's lots of reasons for them and what different kinds of problems they solve.
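One way to picture the "closure wall" point: a plain closure versus a map whose parts stay addressable after definition (a sketch; names hypothetical):

```clojure
;; Closed: the + and the 2 are baked in and unreachable from the outside.
(def add2 (partial + 2))

;; Open: every part is still data, so plain fns can rewrite it after the fact.
(def add2-map {:op + :in [(fn [& args] (cons 2 args))]})
(def mul2-map (assoc add2-map :op *))  ; swap the operator, keep the :in step
```

Fed through the `run-af` posted earlier, `(run-af add2-map 3)` yields 5 and `(run-af mul2-map 3)` yields 6, with no way to make the equivalent change to `add2`.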

phronmophobic17:03:15

> So in that way, it's so general that it's hard to choose just one "problem statement."
One way to deal with that is to make the problem smaller. One way to make the problem smaller is to specialize it (i.e. reduce the scope to a smaller problem). At some later point, if your solution addresses the smaller problem, you can later try to generalize the approach. Generalizing is easier because you've already learned more about the problem and solution spaces while working on the smaller problem. (or if the approach didn't work on the smaller problem, you've saved a bunch of time and effort).

phronmophobic17:03:24

For example, I think limiting the scope to just focus on describing UI components might be useful, which was the original goal in the first place.

john17:03:31

Yeah true. I'm going to tackle rewriting comp.el in this new formalism soon. Maybe tonight

phronmophobic17:03:40

I still think it would be helpful to try and write down some sort of problem statement for that purpose as well. Writing things down is unreasonably effective in my experience.

john17:03:01

Yeah, having this conversation has clarified a lot of things for me

john17:03:31

Just trying to explain my understanding of it

phronmophobic17:03:00

I wrote a design series for one of my libraries thinking it would help other people, but I'm pretty sure the biggest improvement was in my own thinking. I found so many design issues and jargon issues in my project by just writing things down.

john17:03:18

Did you have those design docs public?

john17:03:49

Or a link to a design doc you think is well formed

phronmophobic17:03:19

That Design In Practice youtube link is a much better resource.

john17:03:00

Those are some very good docs

phronmophobic17:03:32

That post is from 3 years ago! 👓

phronmophobic18:04:14

I think you have to get a better grasp of what these things are, their tradeoffs, etc, before you can really pick a good name.

Ludger Solbach09:04:29

I'm with @smith.adriane regarding the relevance of the name. Once it is used it will stick, and to be used it has to be good and convey the meaning of the concepts behind it. That's the case with e.g. FP, OOP, AOP.

Ludger Solbach09:04:26

As far as I understand it, your data driven approach to function definitions/implementations is providing extension points to change the behaviour (effect) of the function when called, depending on the data provided with the map. And because it is just a map, it can be manipulated.

Ludger Solbach09:04:38

To define and transport the meaning, maybe you should answer a few questions, like: What are the key differences/advantages of this approach compared to function redefinitions (e.g. memoize), which also can add an extension effect to the implementation? What's the difference of this approach with providing strategies via function parameters (e.g. with partial)?

Ludger Solbach09:04:21

IMHO your approach is more dynamic, because you're treating the extensions and implementations as data which can be changed and composed at runtime with the tools of data manipulation.

Ludger Solbach09:04:31

What could you do with this approach? Some classic use cases for AOP are instrumentation (e.g. tracing, change detection, validation). More interesting would be some examples of how to extend the functionality of an application without changing the existing code. An example given in "AOSD with Use Cases", which I referenced earlier, is to extend a hotel reservation system with a waiting list feature as an extension, without changing the code for the reservation functionality.

john18:04:49

What about "open functions," so as to contrast with functions usually "closing" over their implementations

john18:04:23

Open Functional Programming

john18:04:29

Not sure if I'd name the library open-functions though

john19:04:18

I don't know. If y'all think "affect" is too tacky or off the mark, I personally think "data functions" or "open functions" sorta conveys the meaning of the idea. I can't think of many other analogies. I think open relates to partial, in the sense that a partial is partially open and partially closed

phronmophobic19:04:46

Like I said earlier, I think you have to have a better understanding of what it is before you can choose a good name.

john19:04:18

Well what do you think it is? Other than "open functions" or "data functions"?

john19:04:52

I think your idea of decoupling impl and data type instance is different than this one and is more like a superset of data/open function functionality

phronmophobic19:04:20

I don't know. My approach would be to do more design work:
• find and read prior art
• create a table that differentiates it from different approaches to similar problems

phronmophobic19:04:44

Even just organizing the discussion so far into summarized form would probably help.

john19:04:23

Yeah but see I'll go through all this work, writing up a design doc that explains how this thing relates to some word in the dictionary, and then y'all'll be like, "No, we hate that name too!" 😆

john19:04:03

The name of the things and the name of the library can be two different things too

john20:04:58

So Google's Gemini thinks we should either call them transformers or coin a new term for it called fluxors 😂

john20:04:19

I think both of those are pretty good

john20:04:54

Any objections to transformer?

john20:04:16

Well, it'd be hard to google for these days

john20:04:30

But I mean, you literally are transforming one function into another function via a series of data transformations

john22:04:50

Okay, how about we call these things transformers and my lib can be called Deft because it provides a deft macro for creating a root transformer. And if anyone wants to make a different transformers library they can call it something else, but we can all call them transformers. Yeah?

john22:04:20

I want to only get a reference prototype built with deft, for the purposes of sussing out good semantics for the function composition stuff. But if it's a good idea I'd hope others would make better, different or more performant implementations.

john14:04:40

So it seems to check out with mathematical language: "In category theory, a branch of mathematics, a natural transformation provides a way of transforming one functor into another while respecting the internal structure of the categories involved. Hence, a natural transformation can be considered to be a 'morphism of functors'."

👍 2
phronmophobic16:04:32

Very cool. One thing to watch out for is using a word with precise meaning, but using it incorrectly. My category theory is too weak to tell if that's the case here.

phronmophobic16:04:16

But yea, the best case is if you do find an existing idea that you can build on top of.

Ludger Solbach21:04:27

then people can connect their prior knowledge to your ideas.

john15:04:38

Oh wow, so it's already sorta a thing: "A monad transformer makes a new monad out of an existing monad, such that computations of the old monad may be embedded in the new one. To construct a monad with a desired set of features, one typically starts with a base monad, such as Identity, [] or IO, and applies a sequence of monad transformers." https://hackage.haskell.org/package/transformers-0.6.1.1/docs/Control-Monad-Trans-Class.html I even had a variadic identity function as the base :op fn in my impl. Very similar in spirit.

john15:04:14

I'll study up on that. I don't understand monads very well yet but I should be able to figure it out. I'd also be interested in hearing from Clojurists what API they'd want from a function transformer in Clojure, and what features from monad transformers Haskeller Clojurists would want to bring over to Clojure-land.

john15:04:58

Is there a "Clojure for Haskellers/monad rangers" channel here anyone know of?

john15:04:13

And what does lift mean in the Control.Monad.Trans.Class docs and in Haskell in general?

john15:04:37

Sounds like type transformations that we don't have to worry about

john15:04:06

I'm also trying to think through what a transducer transformation api might look like :thinking_face:

john15:04:44

I'm working on docs right now and coming up with examples. I've got a dispatch pattern similar to multimethods but only for ancestors. Then a stateful version to show how you could use state to implement full-blown multimethods. I have the mocking examples. The todomvc examples. I'd like a few more use cases to bang on the api before releasing an alpha of Deft

john15:04:36

haskell transformer docs seem to have some interesting use cases to draw inspiration from

john15:04:13

A library called https://github.com/jacekschae/conduit seems to have a simplified api for building transducers. Might be able to draw inspiration there for a simple, declarative, data-driven description of transducers, that can easily be morphed/transformed into other transducers

john15:04:52

And I wonder how many ideas from Sussman's "Layering" talk could be implemented in Clojure transformers: https://www.youtube.com/watch?v=EbzQg7R2pYU

john16:04:10

Yeah, def sympathizing with the transformer stack wrangling here https://youtu.be/8t8fjkISjus?si=KXr7gztAgUtoNJWM

john18:04:48

Interesting take in this pdf (https://drops.dagstuhl.de/storage/01oasics/oasics-vol076-plateau2019/OASIcs.PLATEAU.2019.3/OASIcs.PLATEAU.2019.3.pdf), for OCaml I believe, where they say:
> Speaking from the personal experience of implementing thousands of line of program transformations, it is difficult to maintain a declarative, consistent, and reusable pattern of implementation that scales well for even a few dozen transformations. To resolve these issues, we propose a domain-specific language for program transformations that can operate on three different levels of abstraction: the concrete syntax tree, the abstract syntax tree, and the generalized syntax tree. The concrete syntax tree and abstract syntax tree are familiar: the former including rigid details such as the exact whitespace of the code to be transformed and the latter including only the underlying structure that is fed to, for example, the evaluator of the language. The generalized syntax tree operates at an even higher level than the abstract syntax tree, and allows for an even more declarative approach to specifying program transformations.

So the tool they're talking about still operates at compile time, patching things transparently like AOP. But similar to what they're saying there, I think of these function transformers as a higher-level, but still intermediate, representation of a function, above the token level and the AST level. I believe that's how Elixir's macros work - via the AST. But if you're just using the higher-level, intermediate model for the function, it can be a kind of macro that can be defined at runtime and operate on functions made of data at runtime. It appears that's how Haskell's transformers lib works, updating monads as runtime data.

john18:04:46

In the Haskell Transformers sense, lisp macros are arguably transformers at the token level. These "function transformers" work at a higher level semantic abstraction for functions. As a result, you can manipulate them at runtime, even without compile/eval or access to vars in static environments like cljs or babashka

Ludger Solbach22:04:05

cool that there's some input and prior art to draw from. It also validates your ideas in a way. For compile time AST transformations we have macros, of course. But if I understand your approach correctly, it is a form of runtime transformation which should be more dynamic. Maybe parts of the API could use macros, but in general, a data driven API is preferable, because it's more dynamic and composable.

john22:04:37

Yeah, making "parts of functions as data" basically gives plain old functions macro power over those "functions as data"

john18:03:34

So hopefully that simpler implementation makes it easier to assess the idea. What do y'all think? Good idea? Bad idea? Pros and cons?
