2024-03-21
I'm interested in folks' opinions on the ideas in https://github.com/johnmn3/af.fect I'm still not 100% sure it's a good idea, or whether it was already invented and forgotten in the lisp community decades ago, as it's a pretty simple idea. The idea keeps coming back to me though, because I like to test LLMs by asking them to make a version of affect: "create a function that takes a function (the operator) and returns a function that either applies its arguments to the operator function or takes a map that can redefine the data passed to and returned from the operator function, which returns a new function that can do the same thing as its parent function." Some of them do a pretty good job! I like the problem because it's almost like a quine of some sort. I spruced up one of the answers to create a more simplified version of affect:
(defn mk-static-effect [{:as ctx :keys [static-effect pre af post merge-fn]} args]
  (or static-effect
      (let [res (post (af (pre (assoc ctx :args args))))
            ctx (if res ((or merge-fn merge) ctx res) ctx)]
        (fn static-effect [& args]
          (apply (:ef ctx) args)))))
(defn mk-effect [{:as ctx :keys [pre af post args merge-fn effect]
                  :or {merge-fn merge}}]
  (or effect
      (fn effect [& args]
        (let [res (post (af (pre (assoc ctx :args args))))
              ctx (if res (merge-fn ctx res) ctx)
              new-args (:args ctx [])
              ef (:ef ctx identity)]
          (apply ef new-args)))))
(declare extend-fn)

(defn mk-affect [{:as ctx :keys [affect]}]
  (or affect
      (let [init (:init ctx identity)
            ctx (init ctx)]
        (fn affect [& args]
          (if-not (some-> args first meta (contains? :a/f))
            (apply (mk-effect ctx) args)
            (if (some-> args first :dump?)
              ctx
              (apply extend-fn ctx args)))))))
(defn ctxify [ctx-or-fn]
  (if-not (map? ctx-or-fn)
    {:ef ctx-or-fn}
    ctx-or-fn))
(defn comp-key [k ctx ctxs & [catch-fns?]]
  (let [old-afn (k ctx identity)
        afn (if (fn? (first ctxs))
              (if-not catch-fns?
                identity
                (first ctxs))
              (k (first ctxs) identity))
        afns (->> ctxs
                  rest
                  (mapv k)
                  (filter identity)
                  (concat [old-afn afn])
                  reverse
                  (apply comp))]
    afns))
(defn merge-ctxs [ctx ctxs]
  (let [merge-fn (-> ctx :merge-fn (or merge))
        ctx (apply merge-fn ctx (filter map? ctxs))]
    ctx))
(defn mk-fn-extender [ctx ctxs]
  (let [ctx (ctxify ctx)
        init (comp-key :init ctx ctxs)
        pre (comp-key :pre ctx ctxs)
        af (comp-key :af ctx ctxs true)
        post (comp-key :post ctx ctxs)
        new-ctx (merge-ctxs ctx ctxs)]
    (mk-affect (assoc new-ctx :init init :pre pre :af af :post post))))
(defn extend-fn [ctx & ctxs]
  (if (:freeze? (first ctxs))
    (mk-static-effect ctx ctxs)
    (mk-fn-extender ctx ctxs)))
(def add (extend-fn +))

(def add-and-inc
  (add
   ^:a/f
   #(assoc % :ef (fn [& args]
                   (->> args (apply (:ef %)) inc)))))

(add-and-inc 2 2) ;=> 5
It's not totally clear what the ultimate goal is. Just browsing through the implementation I see a couple of things:
I'm pretty skeptical of designs where every input is a "maybe this or that". If every value could be a wrapped or unwrapped value, then you end up with a combinatorial explosion of code paths that can be very hard to read, reason about, or debug. Further, normal functions no longer work and must be converted into maybe-this-or-that oriented functions, which reduces reusability.
It's hard to tell, but I think you've reinvented the monad.
> if-not (some-> args first meta (contains? :a/f))
Metadata on functions is undefined, https://ask.clojure.org/index.php/11514/functions-with-metadata-can-not-take-more-than-20-arguments?show=11515#a11515
Further, :a/f seems to be data and not metadata (ie. mk-affect does not have value semantics since equal inputs do not have equal outputs).
Using homophones (eg. affect and effect) for similar, but different, concepts is asking for trouble.
This style also reminds me of defadvice from elisp. Maybe there's some inspiration to draw from there, https://www.gnu.org/software/emacs/manual/html_node/elisp/Advising-Functions.html
It also seems similar to https://en.wikipedia.org/wiki/Aspect-oriented_programming
The goal of it is to allow implementation reuse, rather than having to reimplement everything if a change is required half-way up the composition stack of a given function. It's a hard-to-describe problem, but it was one I faced when building lots of cljs widgets: being able to branch off versions of existing implementations and change things normally hidden behind the encapsulation of the closure. The real pain came when trying to adapt a component that had an internal managed-component, abstracting away change handlers and state management for the developer, but requiring a reimplementation for every version involving different state management semantics. This led to a seemingly unnecessary amount of code duplication in the codebase. I made a simple experimental component lib with it here https://github.com/johnmn3/comp.el but never got around to creating the example where you have lots of code duplication, as the todolist example was too simple to show it. But that's the general point - to reduce duplication of concrete implementations that can otherwise be shared transparently
Fair point about the maybe-this-maybe-that. Transducers introduce this-and-that pathways that are pretty different, but I'm not sure what you mean by "normal functions no longer work" - what does that mean? Callers might not know they're calling an extendable function, and they don't need to know. And you can freeze the function so it can't be extended, if necessary
> Transducers introduce this-and-that pathways that are pretty different
I would differentiate between branching (which may or may not be essential) and values that are "maybe this or that", which requires branching. Except for reduced?, I don't think transducers have any "maybe this or that" values.
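For reference, reduced is that one wrapped-or-plain branch; reduce itself checks reduced? at each step to decide whether to unwrap and stop. A minimal example:

(reduce (fn [acc x]
          (if (> acc 10)
            (reduced acc) ; wrapped value: signals early termination
            (+ acc x)))   ; plain value: keep reducing
        0
        (range 100))
;=> 15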
What's the difference between an extensible function and a wrapped function?
Interesting point about the metadata. There are a few other ways to do it, like having a special parameter that switches the mode when it's passed in.
> There are a few other ways to do it, like having a special parameter that switches the mode when it's passed in.
Protocols are often a good choice. They're extensible and it removes branching in the implementation (ie. (affect x))
I just mean the extra arity on like map and reduce, where the transducer version is more open
Right, you could have something that uses deftype that implemented IFn, but is also usable as data via get or assoc.
Yeah, that's probably best, but the impls diverged more so between the clj and cljs versions. Just using a parameter passed in is good enough to show the idea of how it works though. The best solution would definitely use protocols
I think this could also be implemented as a monad. Rather than having every operation take a "maybe this or that", you just have the return and bind operations that bridge the gap with normal functions.
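A rough sketch of that shape, as a state-style monad threading a context map (all names illustrative):

;; a "computation" is a fn from ctx to [value ctx]
(defn m-return [v] (fn [ctx] [v ctx]))

(defn m-bind [m f]
  (fn [ctx]
    (let [[v ctx'] (m ctx)]
      ((f v) ctx'))))

;; lift a normal function into the monad untouched
(defn m-lift [f]
  (fn [v] (m-return (f v))))

((m-bind (m-return 2) (m-lift inc)) {}) ;=> [3 {}]

Normal functions stay normal; only return/bind know about the context threading.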
I'm not sure I get what you mean by "this or that" actually. To the consumer of the function, the "that" is an implementation detail they never have to know about
Yea, could just be a misunderstanding. I was just looking at the implementation and every function starts with:
(or this
    (that ....))
oh, that's just so implementers can whole-cloth drop in entirely different definitions of what makes up the machinery of the thing
Most wouldn't ever use that low-level feature. The idea there is that you don't need a version two of extend-fn; just pass in the version-two part
It's like turning a function inside out, because it can be called from the inside, by the params being passed in
But maybe that's just a gimmick... It'd work just as well always calling "extend-fn" when you want to extend the fn
Or you could just use a map
The only interesting aspect is that you don't have to require in any extend-fn lib, because it's already built into the fn you require in
extend via metadata might also be an option, https://clojure.org/reference/protocols#_extend_via_metadata
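For reference, extend-via-metadata looks something like this (protocol name and wrapping behavior are illustrative; requires Clojure 1.10+):

(defprotocol Extendable
  :extend-via-metadata true
  (extend-fn* [f ctx]))

(def add
  (with-meta
    (fn [& args] (apply + args))
    {`extend-fn* (fn [f ctx]
                   ;; build a new fn from the old fn plus the ctx map
                   (fn [& args] ((:wrap ctx identity) (apply f args))))}))

(add 1 2) ;=> 3
((extend-fn* add {:wrap inc}) 1 2) ;=> 4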
> A map with fn impled on it?
It's hard to discuss in the abstract, but I would probably just keep the impl separate. The program manipulates data up until the very end, and then there's a final transformation that turns the data into a machine/implementation. And there can be multiple data -> machine options available
Eg. I have a big datastructure that represents my blog. I keep transforming and accreting it. At the end, I use the data to spit out a desktop website, a mobile website, and an app, or whatever.
Maybe I'm having issues, so I use the same datastructure and spit out a website with extra instrumentation and logging.
That's one possible direction to go. I think there are existing problems it solves though. Not a lot maybe, just some niche situations. OO was pretty much invented for building GUIs, and you get some of that impl sharing here
I also think OO is bad for building GUIs
And some of the problems with object encapsulation and data hiding are also there with closures
But there's definitely an issue I've seen with how we do things in clojure where we end up duplicating code, because that's just the easiest way to solve a problem, since there's no impl reuse in clojure in that way
Maybe I'm not actually following what you mean by impl reuse?
Or do you have an example where code is duplicated unnecessarily?
Like, we'll make functions all the time, that are just a composition of 5 or 10 other functions, right?
We might wrap function 8 to make function 9. But what if function 9 needs function 4 to behave differently, without having to reimpl functions 5 through 8?
With this, the impl of fn 4 can be exposed, so that fn 6, 8, or 23 can hot-swap it out for something else
> But what if function 9 needs function 4 to behave differently
That seems like a bad problem to have. Ideally, functions are decoupled and composed together.
To me, there's a difference between solving coupling by making it easier and solving coupling by taking things apart and decoupling them. I do think clojure tends to actively avoid making it easier to couple things together.
I don't think so.
Well, you know, you end up in a situation and you're like, "dang it, I wish I could get to the data hidden behind that closure boundary, hmmm"
Going back to the blog example. To me, that means you stopped working with data prematurely.
You wouldn't actually use this for that though. You'd just use a map for that, right? This just wouldn't be good for that I think
Here's another way to look at this. It's like you have interceptor chains on the inputs and the outputs of your function. You can extend the behavior of that function, creating a new version of it, by augmenting the interceptor chains before and after the fn
Maybe I'm just a weirdo, but I would. The components in my UI library are maps (defrecords, not literals).
> You can extend the behavior of that function, creating a new version of it
I also think this is the wrong perspective. It's not a new version of the function. It's a different function.
If it takes a different type of thing, has a different behavior, or returns a different type of thing, then it's not a version of the old function, it's a different function.
I think these subtle distinctions are actually important from a design aspect when building larger applications. I would say "parent" function and "re-implement" are tricky, not simple, and difficult to reason about. If at all possible, I would prefer using regular functions and "reuse" over* "reimplement".
I guess it's true that I don't think you should care about the insides of functions.
It seems like it would be helpful to have a concrete example. If you don't think the blog example is a good one, maybe it would be helpful to think about another or even just say why the blog example isn't applicable in order to brainstorm another.
I'm also happy to let bygones be bygones if you don't think this discussion is helpful. I admit I can get carried away sometimes.
Nah, I love that you're challenging the idea! I'm not convinced about it myself. I just have this strong suspicion and it keeps coming back to me. Maybe I'm just attracted to the simple quine-like nature of the solution
So, in the readme, you can see this example:
(def el
  (af
   {:as ::el :with [add-props classes]
    :env-op form-1})) ; <- env-op also passes the environment to the op

(def grid
  (el
   {:as ::grid
    :props {:comp mui-grid/grid}}))

(def container
  (grid
   {:as ::container
    :props {:container true}}))

(def item
  (grid
   {:as ::item
    :props {:item true}}))

(def btn
  (el
   {:as ::btn
    :props {:model :button
            :comp mui-grid/button}}))

(def input
  (el
   {:as ::input :with [hide-required use-state validations]
    :props {:comp mui-grid/text-field}}))

(def form-input
  (input
   {:as ::form-input
    :props {:style {:width "100%"
                    :padding 5}}}))

(def email-input
  (form-input
   {:as ::email-input
    :props {:label "Email"
            :placeholder ""
            :helper-text "validating on blur"}
    :validate-on-blur? true
    :valid [#(<= 4 (count %)) "must be at least 4 characters"
            #(= "@" (some #{"@"} %)) "must contain an @ symbol"
            #(= "." (some #{"."} %)) "must contain a domain name (eg \"\")"]}))

(def password ; <- abstract
  (form-input
   {:as ::password-abstract
    :props {:label "Password"
            :type :password}
    :valid [#(<= 8 (count %)) "must be longer than 8 characters"]}))

(def password-input
  (password
   {:as ::password-input
    :props {:validate-on-blur? true}}))

(def second-password-input
  (password
   {:as ::second-password-input :with submission
    :valid [#(= % (password-input :state))
            "passwords must be equal"]
    :fields [email-input password-input second-password-input]
    :props {:on-enter (fn [{:as _env :keys [fields]}]
                        (ajax-thing/submit-fields fields))}}))

(def submit-btn
  (btn
   {:as ::submit-btn :with submission
    :fields [email-input password-input second-password-input]
    :props {:variant "contained"
            :color "primary"
            :on-click (fn [{:as _env :keys [fields]}]
                        (ajax-thing/submit-fields fields))}}))

#_...impl

(defn form [{:as props}]
  [container
   {:direction "row"
    :justify "center"}
   [item {:style {:width "100%"}}
    [container {:direction :column
                :spacing 2
                :style {:padding 50
                        :width "100%"}}
     [item [email-input props]]
     [item [password-input props]]
     [item [second-password-input props]]
     [container {:direction :row
                 :style {:margin 10
                         :padding 10}}
      [item {:xs 8}]
      [item {:xs 4}
       [submit-btn props
        "Submit"]]]]]])
Notice how password's behaviors and attributes accrete onto form-input, and then password-input and second-password-input accrete their custom behaviors onto password
password-input, if necessary, in its impl, can change the width and padding specified in form-input
I'm trying to figure out how this is different than just doing that with maps?
Well, you could store everything as maps at the top level and have some indirection thing turning them into things that are functions that derive from one another's maps, that'd work too
That makes sense. That's kind of what I was thinking. Just use maps/data. To produce the final artifact, you take the giant datastructure and turn it into the "machine" that runs your application.
At any point along the way, you can accrete cross cutting concerns like logging, instrumentation, apply optimizations, and otherwise.
And you can have multiple choices of how to spit out prod app, debug app, internal tool, debugger, etc. from the data.
For me, the important part is to document the data specification (the semantics of properties and which values are valid) rather than trying to treat intermediate data as functions.
Like this:
(def add (extend-fn +))

(def bad-key
  (add
   ^:a/f
   {:init (fn [ctx]
            (println :init ctx)
            (when (-> ctx (contains? :secret))
              (throw (js/Error. "No secrets allowed")))
            ctx)}))

(bad-key 1 2) ;=> 3

(def add-and-inc
  (bad-key
   ^:a/f
   {:secret :sauce
    :af (fn [{:as ctx :keys [ef]}]
          (assoc ctx :ef (fn [& args]
                           (->> args (apply ef) inc))))})) ;=> error: No secrets allowed
So that gets caught at compile time. That's why I brought in the "affects" idea, trying to differentiate between compile and run time. Though in this impl only init runs exclusively at compile time; pre, op and post all run at runtime
I'm not sure "compile time" and "run time" make sense without a specific environment. I think just having separate validations that can be applied for specific uses makes more sense.
ie. dev check, staging check, prod check, foo-company-pre-checkin-check
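A sketch of what separate, per-use validations might look like (check contents illustrative):

(def checks
  {:dev  [(fn [ctx] (assert (map? ctx) "ctx must be a map"))]
   :prod [(fn [ctx] (assert (not (contains? ctx :secret))
                            "No secrets allowed"))]})

(defn validate [stage ctx]
  (doseq [check (checks stage)]
    (check ctx))
  ctx)

(validate :prod {:op +})            ;=> {:op +}
(validate :prod {:op + :secret :s}) ;=> throws AssertionError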
This implementation seems "operation focused" rather than data-oriented.
I might be using the wrong terms here too. But the point there was that the error will be thrown at compile time, and add-and-inc will never get to be defined.
Whereas sticking the throw in a pre or post would not throw until the function was called, potentially, depending on impl
(def bad-key
  {:init (fn [ctx]
           (println :init ctx)
           (when (-> ctx (contains? :secret))
             (throw (js/Error. "No secrets allowed")))
           ctx)
   :op +})

(invoke bad-key 1 2)

(def add-and-inc
  (merge
   bad-key
   {:secret :sauce
    :af (fn [{:as ctx :keys [ef]}]
          (assoc ctx :ef (fn [& args]
                           (->> args (apply ef) inc))))})) ;=> error: No secrets allowed
Here's some pseudo code for what I imagine a more data-oriented api might look like.
Well, merge wouldn't produce that error, right? But I get your point about the data orientation
That's all this is, taking care of the special-invoke and special-merge, for data defined functions
well, merge+validate, maybe
In the impls I've been playing with, we comp together functions of like keys for some of the keys
maybe because the secrets check isn't the best example, but if you did want something like that, you could have some special helpers for merge+validate for sugar. I'm not sure I'm totally sold.
yea, clojure.core/merge might not be enough and you might want a special cool.lib/merge or cool.lib/combine or whatever is actually a good name for it.
I would still want validation to be available separately, even if it's more idiomatic for your use case to combine them.
For some stuff you'll want a deeper merge too, but you can define those within the data as well
Yea. The key idea is that it's just a data operation which takes data and returns data.
Like, for my components, I'm merging the style maps together, so one :style key doesn't clobber the other
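Presumably something along these lines (a sketch; merge-props is an illustrative name):

;; merge two prop maps, merging nested maps like :style instead of clobbering
(defn merge-props [a b]
  (merge-with (fn [x y]
                (if (and (map? x) (map? y))
                  (merge x y)
                  y))
              a b))

(merge-props {:style {:width "100%"} :label "a"}
             {:style {:padding 5}})
;=> {:style {:width "100%", :padding 5}, :label "a"}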
See, you don't need merge+validate if you have some merge-magic that allows you to add validation behavior to the thing downstream
So, an interesting question about this thing is, what is the minimal impl that allows you to build an extensibility system like this, where you can accrete in behaviors like validation after the fact. The above is one of the more minimal versions I've come up with that has a half decent api
> See, you don't need merge+validate if you have some merge-magic that allows you to add validation behavior to the thing downstream
That's the thing. I'm not sold on needing the validation at every definition anyway. I definitely don't want merge to magically transmogrify depending on some config.
That's moving away from data orientation to operation orientation and I don't think it helps.
I don't think you can say whether data is valid outside of a particular context. Defining data should usually be contextless (ie. not coupled to a specific use case).
lol I hear you. It'd still be functional and immutable, but yeah it sounds like it could get hairy
I've worked with those kinds of systems where you need to reconfigure your environment to get things to work together. It then becomes difficult to reuse the same code in a new context like staging, debugging, prototyping, benchmarking.
> So, an interesting question about this thing is, what is the minimal impl that allows you to build an extensibility system like this, where you can accrete in behaviors like validation after the fact.
Just have the operations you want a la carte. You can then take the simple stuff and compose it with those ops when it's convenient.
It's super easy to set up your workflow so validation happens on every eval/file change/checkin/git push.
Or not, if you're prototyping.
well, that example was about form validations, but yeah. Like you said, you could hook in any instrumentation you want
Interesting discussion!
I always learn something.
I'm sure new ideas will come up later after a nap.
Yeah, it's helpful to get some feedback on these weird ideas sometimes, to see if they have any merit
It's possible that, even in the gui situation I found this pattern useful for, there's a better way still for that problem and I just missed it. But I still have this suspicion it might be useful in one of those niches. I'll think about making it less implicit though, and looking more like traditional data orientation, rather than breaking the closure boundary rules. That definitely causes a knee-jerk reaction and is hard to swallow lol
Oh, by validation I thought you were referring to the form validation example. But yeah, I agree, and if you store the function maps as just top-level maps you could just spec them at compile time. We already have solutions for most of these problems - you definitely don't need this just to do that. I was just using that to show an example where you can do stuff inside one of these function maps at the time the function instance is instantiated vs when it is called
Okay, so here's another impl that keeps maps at the top level:
(defn mk-effect [{:as ctx :keys [pre af post merge-fn effect]
                  :or {pre identity af identity post identity
                       merge-fn merge}}
                 & args]
  (or effect
      (let [res (post (af (pre (assoc ctx :args args))))
            ctx (if res (merge-fn ctx res) ctx)
            new-args (:args ctx [])
            ef (:ef ctx identity)]
        (apply ef new-args))))

(defn ctxify [ctx-or-fn]
  (if-not (map? ctx-or-fn)
    {:ef ctx-or-fn}
    ctx-or-fn))

(defn comp-key [k ctx ctxs & [catch-fns?]]
  (let [old-afn (k ctx identity)
        afn (if (fn? (first ctxs))
              (if-not catch-fns?
                identity
                (first ctxs))
              (k (first ctxs) identity))
        afns (->> ctxs
                  rest
                  (mapv k)
                  (filter identity)
                  (concat [old-afn afn])
                  reverse
                  (apply comp))]
    afns))

(defn merge-ctxs [ctx ctxs]
  (let [merge-fn (-> ctx :merge-fn (or merge))
        ctx (apply merge-fn ctx (filter map? ctxs))]
    ctx))

(defn mk-fn-extender [ctx & [ctxs]]
  (let [ctx (ctxify ctx)
        init (comp-key :init ctx ctxs)
        pre (comp-key :pre ctx ctxs)
        af (comp-key :af ctx ctxs true)
        post (comp-key :post ctx ctxs)
        new-ctx (-> ctx
                    (merge-ctxs ctxs)
                    (assoc :init init :pre pre :af af :post post))]
    new-ctx))

(defn extend-fn-map [ctx & ctxs]
  (when-let [init (:init ctx)]
    (mapv init ctxs))
  (mk-fn-extender ctx ctxs))

(defn invoke-fn-map [fn-map & args]
  (apply mk-effect fn-map args))
So then you can do the same thing with extend-fn-map and invoke-fn-map like:
(def add
  (extend-fn-map {:ef +}))
;=> {:ef #function[...] :init #function[...] ...

(def public-add
  (extend-fn-map
   add
   {:init (fn [ctx]
            (when (-> ctx (contains? :secret))
              (throw (js/Error. "No secrets allowed")))
            ctx)}))
;=> {:ef #function[...] :init #function[...] ...

(invoke-fn-map public-add 2 3)
;=> 5

(def add-and-inc
  (extend-fn-map
   public-add
   {:secret :sauce
    :af (fn [{:as ctx :keys [ef]}]
          (assoc ctx :ef (fn [& args]
                           (->> args (apply ef) inc))))}))
;=> error: No secrets allowed
I think as the approach becomes more data oriented, the implementation matters less, and the data specification and semantics become more important.
I'm still not sure I totally understand the intended usage. My intuition is that you still want a way to separate data definitions from validation.
It's kinda like modeling functions as data and then manipulating them like macros but with functions, for the purposes of sharing implementation data between functions even after they're defined
Not so much about modeling the world or problem domains; just modeling functions and their various phases (inputs, outputs, construction, finally, validations, whatever properties you want). It's about the behaviors of functions
> for the purposes of sharing implementation data between functions even after they're defined
that sounds like something you specifically want to avoid. It's hard to tell if you're trying to model workflows, data pipelines, or something else
IMO, functions shouldn't have phases, but phases may have functions
I think you might not need to avoid it when you're dealing with intrinsically hierarchical composition of a large number of functions
But sometimes I think we just might genuinely want impl sharing. Do you really think a case can be made that impl sharing is never good?
But there seems to be a vacuum for that niche, for when it is actually good (unless it's never actually good!)
> Do you really think a case can be made that impl sharing is never good?
I don't. I'm not sure if it's good in this case and I'm also not sure this is a good technique if it is useful.
A more detailed rationale or example use case would be needed for me to give any more specific, useful feedback. Right now, I only have vague intuitions that the approach could be either more general or simplified.
Again, I didn't think this example ended up doing the concept justice, because todomvc doesn't require a large hierarchy of components, but here you can see an example where new-todo derives from todo-input: https://github.com/johnmn3/comp.el/blob/main/ex/src/todomvc/views/comps.cljs#L56
(def todo-input
  (comp/raw-input
   {:as ::todo-input :with [styled/todo-input a/void-todo]
    :props/void :af-state
    :props/ef (fn [{:keys [on-save on-stop af-state]}]
                (let [stop #(do (reset! af-state "")
                                (when on-stop (on-stop)))
                      save #(do (on-save (some-> af-state deref str str/trim))
                                (stop))]
                  {:auto-focus true
                   :on-blur save
                   :value (some-> af-state deref)
                   :on-change (fn [ev] (reset! af-state (-> ev .-target .-value)))
                   :on-key-down #(case (.-which %)
                                   13 (save)
                                   27 (stop)
                                   nil)}))}))

(def new-todo
  (todo-input
   {:as ::new-todo :with styled/new-todo
    :props {:placeholder "What needs to be done?"
            :af-state (r/atom nil)
            :on-save #(when (seq %)
                        (dispatch [:add-todo %]))}}))
It just mixes in some styles and properties to augment todo-input. Normally to do this, we'd just parameterize those attributes and merge them in within the todo-input fn. But below new-todo you can see that existing-todo needs special behaviors depending on the state of values passed to it (editing, id and title).
(def existing-todo
  (todo-input
   {:as ::existing-todo :with styled/edit-todo
    :props/af (fn [{:keys [editing]
                    {:keys [id title]} :todo}]
                {:af-state (r/atom title)
                 :on-save #(if (seq %)
                             (dispatch [:save id %])
                             (dispatch [:delete-todo id]))
                 :on-stop #(reset! editing false)})}))
(Because the composition of todo-input is no longer locked behind a closure boundary, the :props/af behavior of existing-todo is merged into todo-input. In normal composition, we would have to rewrite todo-input.)
Normally when building these components, we close over various aspects of their implementation. In the above example, when composing regular reagent-like component functions, we might design todo-input to handle new-todo's modifications by passing props through to todo-input. But then, suddenly, a customer wants to see existing todos and, when we go to implement it, we realize that the updates it passes to todo-input need a reference to the editing status of the todo, which requires a reimplementation of todo-input: in todo-input you parameterize a function that takes the editing status, letting todo-input do the work of passing the editing status to existing-todo's passed-in function, which returns the new attributes.
But, unfortunately, we've added 10 more pages to the app since we defined todo-input, and if we change the behavior of todo-input now, we'll need to do lots of testing to make sure we didn't break all these other downstream consumers. So, instead, we decide to make todo-input2, with this new ability that existing-todo needs, simply because it's easiest to copy and paste the code, add the one change, call it only from the new functions, and call it a day.
Then you end up with all these vertical compositions with massive duplication between impls, because it's just easier than fishing new parameters through all the functions in its composition hierarchy and then testing the whole world downstream of those changes. And if you do, you end up with todo-input taking on massive amounts of complexity to handle all possible demands of all possible callers, parameterizing more and more.
With this solution, existing-todo can add behavior to todo-input downstream, just the minimal amount it needs from todo-input to do its job, all without having to change the impl of todo-input.
And in a lot of react code bases, we'll have a "managed-component" that wraps an [:input ...] element.
We want lots of advanced features out of that state management component - form validations, various handlers, change/click/blur, default values, text parsing, text formatting - tons of features that keep growing, until finally your managed component function is hundreds of lines long, handling all possible requirements for all possible callers.
Then, suddenly, there's a new feature request and it's going to require a change to managed-component - quick, somebody get Joe, he's the last one that understood that hairball, etc etc.
With this scheme of function data extension, we can push some changes into the impl of managed-component downstream, via a minimal change, without having to reimplement managed-component and test its consumers, or have to support versions 1, 2 and 3 in parallel.
Another thing I didn't like about that todomvc example: I didn't use the comp.el framework's managed component. I used raw-input so as to stay as true as possible to the way the re-frame example todomvc app was doing state management. It would have looked a lot cleaner getting rid of all the local state atoms and letting the framework abstract state management away. I just wanted to keep the comparison about function composition, and adding implicit state management would have made it less of an apples-to-apples comparison.
So there are two main benefits there: 1. Don't change your code, grow it: instead of changing existing mechanisms to accommodate new features, make new versions and leave the old ones there. And 2. Don't grow by duplication but by sharing: we could achieve pure growth by copying and pasting a new version for each new feature, but then we have to support multiple copies, fixing bugs in multiple places instead of one. If we share implementation, we can achieve change through growth without code duplication
What's the difference between this and normal function composition? eg.
(def add-and-inc (fn [& args] (inc (apply + args))))

(def new-f
  (fn [& args]
    (do
      (before-stuff)
      (let [new-args (modify-args)
            result (apply old-f new-args)]
        (after-stuff)
        result))))
I'm also still hung up on how the behavior is overloaded. If it gets called with a "context", it returns a new function; otherwise, it applies the function. Is that right? How does it know which "mode" it's being called in (ie. how does it know if the argument is a "context thing")?
add-and-inc closes over + and we can no longer update the semantics of + for someone who wants all the beautiful implementation work in add-and-inc but just wants something a little bit different from the way it uses +. Here, we can get in between the inc and the + in add-and-inc, as a user consuming add-and-inc, because add-and-inc carries a description of the history of its composition, which can be decomposed later and recomposed.
The most efficient thing would probably be a protocol/deftype thing like you said, to make the parameter checking fast. But it's really just carrying this impl history as metadata on the function, or carrying it on the inside and dumping it out when a special param is passed in. I did the latter just because it's simpler, so as to get the idea across. But a better implementation would involve protocols, I think.
Why is invoking overloaded with both extension and normal usage?
And you don't have to have it be a parameter-based signal for the mode; you can call it from the outside on every extension, (extend-fn foo bar ...
Yeah, it doesn't matter either way. In this implementation, the fns themselves are very much carrying their implementation history with them, so it just felt more natural to use invocation for both modes. It is a function of the function itself that you're calling when you extend it.
For me, overloading the invocation is confusing.
It seems like these functions should also be more data-like, eg.
(def +s
  (af/fect
   {:as ::+s
    ;; :with mocker
    :op +
    :ef (fn [{:keys [args]}]
          {:args (apply strings->ints args)})
    :mock [[1 "2" 3 4 "5" 6] 21]}))

(+s "1" 2)
;=> 3

(:as +s)   ;; ::+s
(:mock +s) ;; [[1 "2" 3 4 "5" 6] 21]
(keys +s)  ;; (:as :op :ef :mock :with)

((assoc +s
        :op -)
 5 4) ;; 1
It's kind of hard to understand what's going on, because I don't really know what half of these attributes do, like :af/props, :af, :with, etc.
Yeah, :with adds more ctxs from other affects, so you don't have to have single inheritance. You're basically mixing in the other affect contexts while composing their affects like you would with a single inheritance path
:props/af is a special impl of af that affects the :props key, which is introduced by a props affect. It's for dealing with the props of html elements
That's all built in comp.el (the props affect), because it's not needed in the base af.fect lib. We just extend the behavior of af.fect from the outside
Do you have any use cases besides UI components? It's hard to tell if this has general applications or is just trying to manage the goofiness of UI programming.
I think it's maybe a 5% situation, not often, but it can probably be many different shapes
Most stuff done in libs doesn't require massive duplication. That's why it's a lib, right? It's more often in applications that live over time
When you have some massive managed-component-like function that sits halfway up the composition hierarchy for 50 or 100 other functions, down various branches, changing it becomes a very sensitive operation
On the backend, if you have some api that has hundreds of endpoints and something in the stack is acting like a managed-component for all these paths, maybe
That's why I was asking for use cases outside of UI programming. I tend to think a lot of challenges in UI programming are fundamentally due to the underlying OO foundation which UI frameworks don't really address.
It's so easy to change code, we often are better off just adding the new feature to managed-component
The cool thing about pure functions is that the only thing you need to know about them is what arguments they require and return value to expect. Unfortunately, UI components are not even close to pure functions (even in clj/cljs).
But we are altering the semantic of an upstream function, which just feels wrong at first lol
It feels more variable because it can be changed. But the change still flows in the direction of impl. You're not actually changing the upstream function for other callers
It seems like in your todomvc example, the only attributes that are used are :af/prop related (and :with, which also just uses :af/prop related stuff)?
Right. It seems like most of this stuff isn't really about modifying args and return values, but dealing with props.
Yeah, in the context of the UI, most of what you're going to want to do is update those props and pass them around. We could have done all that stuff in an :af, but then we'd have to get the :props out of the context every time we want to update the props. :props/af just gives you the ability to focus your update on just the :props key within the context.
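In other words, something like this (a sketch; props-af is an illustrative name, not the comp.el implementation):

;; lift a props->props function into a context->context affect
(defn props-af [f]
  (fn [ctx] (update ctx :props f)))

((props-af #(assoc % :item true)) {:props {:comp :grid}})
;=> {:props {:comp :grid, :item true}}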
the todo example also doesn't seem to have any example of "modifying a component up the chain".
And it's super convenient that a downstream consumer of a component can update the props of an upstream component it's calling on an ad hoc basis, without affecting other callers
> And it's super convenient that a downstream consumer of a component can update the props of an upstream component it's calling on an ad hoc basis, without affecting other callers
What's an example of this?
existing-todo above. It's adding a props affect that happens upstream of its parent's props effect
Can't that be done with regular function composition?
(def existing-todo
  (fn [{:keys [props] :as m}]
    (let [{:keys [editing]
           {:keys [id title]} :todo} props]
      (todo-input
       (assoc m
              :props
              {:af-state (r/atom title)
               :on-save #(if (seq %)
                           (dispatch [:save id %])
                           (dispatch [:delete-todo id]))
               :on-stop #(reset! editing false)})))))
Yeah, but now a caller of existing-todo, if they want different semantics out of how todo-input works, can write a new todo-input, but they're still going to have to write a new existing-todo too
Now we have todo-input1 and 2, and existing-todo2, all because special-existing-todo needed something special out of todo-input1 that it didn't have. Rewriting todo-input1 is one thing, but having to rewrite existing-todo too sucks. It shouldn't have to change just for special-existing-todo. special-existing-todo can simply point existing-todo to a different version of todo-input, for just its call
Ok, now I'm convinced that this could be simplified.
at least for this use case
For existing-todo, it seems like the problem is that it doesn't actually care about todo-input
If you designed todo-input such that everything it did was parameterized, you could feed it right down through the props, and do that for a chain of callers, letting downstream ones signal upstream ones just by passing that props context along
But when you find yourself parameterizing everything about some deep function, maybe it should just be a fully parameterizable function, built for doing that
Sorry for the slow response, but part of the trouble is that I'm not super familiar with comp.el which is built on af.fect and I'm not super familiar with re-frame which is built on reagent which sits on a mountain of other stuff.
so in the example, todo-input is essentially just a text-input that saves on enter?
Yeah, it followed the re-frame todomvc method of how it handled state as much as possible
Which made for an interesting test for comp.el. Would have been a lot cleaner with just re-frame, abstracted away. As long as every input element you hang in the hiccup tree has a unique id, the framework should be able to handle state for you transparently
And then state management just gets defined by a use-state affect that gets mixed in to any input elements that need to be managed
So editing is a local prop, but I assume you're not supposed to be able to edit more than one todo at a time, right?
actually, it's not a prop, it's a local r/atom
todo-input voids that key from the :props later though, so it doesn't end up in your html props: :props/void :af-state
I'm trying to show how I might write it, but since editing is a property that belongs to a list or an app, I probably wouldn't have it as a local prop.
And I'm not sure I got the api right with af.fect either; I'm more so just proposing that the idea in general might be useful
Do you ever modify existing props besides callbacks like on-save and on-stop?
I gotta go walk the dog, but I'll think about this some more. Maybe a little walk will help.
For this use case, it seems like there's the render function (eg. todo-input) and then there are functions for modifying the input arg to the render function (eg. existing-todo).
The fns that modify the input don't need the render-fn and you can just leave it out:
(defn existing-todo
  [{:keys [editing]
    {:keys [id title]} :todo}]
  {:af-state (r/atom title)
   :on-save #(if (seq %)
               (dispatch [:save id %])
               (dispatch [:delete-todo id]))
   :on-stop #(reset! editing false)})
which could be used like:
(todo-input (-> {}
                (existing-todo)
                (other-modifier)))
It's not really clear if having a way to convey the render function alongside its modifiers is useful, but if it is, then you could just use a map:
{:render todo-input
 :middleware [existing-todo
              sparkly
              etc]}
The idea here is to describe a UI component as data.
In many ways, the code ends up looking similar, but for me, the framing of "here's a map that describes a component" is easier to learn and reason about than trying to frame it in terms of an "extensible function".
It also means you don't really have to learn anything new to make a slightly different component:
(assoc comp
       :render special-todo-input)
Intuitively, I have a strong skepticism about "extensible function" as a concept. Functions are already extensible: function composition, multimethods, arity overloading, protocols, or accepting extensible data like maps. If you want to retain information that can be further manipulated, use a map (or other data).
I actually really like the approach from https://vimeo.com/861600197
That being said, I do think "invokable data" (which is maybe the same thing as the "extensible function" with a different framing) might be an interesting idea for other use cases. It's mostly that if you can use pure data and functions, it should be preferred. At least for me, it took quite some time to start to understand the af.fect API which has its own language/interface for manipulating these fns.
At least in its current form, it's a bit of a rabbit hole. existing-todo derives from todo-input which derives from comp/raw-input which derives from el, etc. If it was just a map, I feel like I can examine the end result and not really worry about how it was derived, but as an extensible function, I feel like I need to not only understand the algebra of extensible functions, but also understand existing-todo's whole ancestry, which is opaque in the current iteration.
Yeah, there's pros and cons to the pure data UI. I think it's a better tradeoff than HTMX. You can drive the whole thing from the backend and just ship the hiccup. You do end up making a lot of DSLs to wrap things that need to be functions on the frontend, but once they're written it works. But you still end up with some of the impedance mismatch of htmx, for those situations where you genuinely need to pass a lambda. And when you go pure data on the front end, you'll still have these massive reduction/transformation steps where you dump the whole world in, all the DSLs get computed into functions, lots of magic happens that only a few people on the team understand, and out the other side magically pops out a new world made out of actual functions. And there'll often be 3 or 4 of those reduction steps, making it very hard to track where everything is going. I've worked on an app built completely out of pure data, with this chain of world transformations, and while I appreciated the beauty of the abstraction (and the ability to transparently migrate some parts between the front end and back end, etc) I'm not sure I'd want to have to support that kind of architecture again. Every time you want to add something dynamic to the system, you have to update so many things in so many places. Not so bad for a turbotax-like app, where every page is similar, just with different text and a half dozen types of form elements; sure, pure data can express that domain easily enough. But if you have some general purpose dashboard that needs to change fast for a diverse audience, then evolving that pure data app fast is going to be hard IMO.
Yeah, I like this invokable map idea, and maybe rebuilding the core from the ground up around that idiom. Might simplify it more
I did make a utility fn for trying to keep track of the rabbit hole where an affect came from. Regular functions are just as opaque though.
For the +sv affect in the readme, the utility fn prints:
{:args (),
 :finally [:base],
 :was :user/+s,
 :is :user/+sv,
 :joins [:mock :void :base],
 :affects [:mock-0 :with-0 :void-0 :base],
 :op #object[cljs$core$_PLUS_],
 :void [:with :mock],
 :effects [:user/+sv-0
           :user/+s-0
           :children-0
           :base],
 :mocks [[1 [2]] 3]}
So at least you can chase down everything it's made of, which might arguably be harder with just functions.
We could do more here to store all data for all affects being composed, so that we could print out the entire context maps for every ancestor, but the functions on some of those keys are still going to be opaque, unless you turn the whole outer world into a dsl that lives in your data
But I mean, everything about this implementation is a prototype. I would never recommend using this in prod, where simply passing a map with :as in it changes the mode of a function. That's destined to blow up somewhere. I'm deliberately keeping some aspects of the impl simple, so as to just show the concept.
A real implementation would probably involve protocols/deftype (or maybe defrecord) and have much better instrumentation for tracing back the composition of an affect. Also, interceptor chains might be a better abstraction for people to manage the ordering of affects. We should probably delay comping until the very end, allowing you to put a new fn between any two fns in the stack. I didn't take it that far, in terms of granularity, but a final solution should probably be able to get fine grained like that, perhaps via a lower-level api.
> a final solution should probably be able to get fine grained like that, perhaps via a lower-level api.
My idea would be to have a data specification, not an API. Obviously, there would be helper functions that make the common case easy, but otherwise, it would be purely descriptive.
> Yeah, I like this invokable map idea, and maybe rebuilding the core from the ground up around that idiom. Might simplify it more
I think the usage and implementation would end up looking pretty similar, but I think there's a huge leap in reuse if the way you read and create these things is just using normal data functions. I think it also aids in understanding.
> For the +sv affect in the readme, the utility fn prints:
The goal is to not need a specific utility function. You should be able to inspect the result like any other data, using familiar tools like portal and clerk.
What are the semantics for how downstream maps can affect upstream maps? Downstream maps should be able to shadow values of upstream maps, redefine them, wrap the ins and outs, delete them. Do you have an idea how that might look, using purely a description language, that is simple?
downstream maps don't affect upstream maps.
I know I keep harping on the PLOP related terms, but I really do think the perspective matters.
Well, my language sounds like I'm actually changing the upstream function lol it's confusing
let's say you have:
{:render todo-input
 :middleware [existing-todo
              sparkly
              etc]}
you can create a new map that uses special-todo-input instead of todo-input like so:
(assoc comp
       :render special-todo-input)
mostly pseudo code, but you could mark your sparkly todo-input dull with something like:
(def existing-todo :existing-todo)
(def sparkly :sparkly)
(def dull :dull)
(def etc :etc)
(def todo-input :todo-input)

(def comp
  {:render todo-input
   :middleware [existing-todo
                sparkly
                etc]})

(require '[clojure.walk])

(clojure.walk/postwalk-replace {sparkly dull}
                               comp)
;; {:render :todo-input,
;;  :middleware [:existing-todo :dull :etc]}
And so most of the design is specifying the semantics of the attributes like :render, :middleware, etc.
For convenience, there will probably be helpers for common transformations and initializers.
As well as helpers for inspection and validation.
Some of the motivation for this approach is also from The Design of Everyday Things. It talks about how it's easier to reason about wide, flat decision trees or narrow, long decision trees. This is trying to turn the problem into a wide, flat decision tree, since all that matters is the resulting data structure. What you don't want is a medium-width, medium-depth decision tree, which I think is where the mutable inheritance model ends up.
Yeah, as long as downstream users are able to get to the original data of any impl map in its ancestry, you can use your regular data manipulation fns to update any function way up the chain, however you want. My existing impl just keeps that data around for you as a hidden value, but if it's more like callable maps, that could be simplified
I think one other subtle difference is that ancestry is this ordered thing and it matters where things came from. With the map based approach, the "history" doesn't matter. It's not a map that derives from another map. It's just a map with X, Y, Z transformations. It doesn't matter how they got there.
I think you want to be able to reuse some of the decisions made. Perhaps your parent decided to delete one of the keys of your grandparent?
You can always reintroduce that key, but we want to reuse the parent's decision when possible
I'm saying you should absolutely avoid caring about how the map was produced. You should only care if the result has or doesn't have an attribute.
At least for me, it's taken a very long time to internalize the philosophy of "just use maps" and I still get it wrong sometimes.
I guess it doesn't matter that a particular part of the shape of the current map came from the parent or the grandparent... From an organizational perspective, some might like to update attrs in a way that is associated with the map/affect it came from, but I suppose that's just projected organization and not strictly necessary. The history of composition can be traced in code, like everything else
Well, some of these affects that are being composed together, between +, +s and +sv, for instance - the order of how those transformations over arguments are applied matters
And, we should also be able to stick an effect between + and +s, in the map for +sv, not just before or after both of them
right, you might have an ordered sequence of middleware as part of your specification.
I'd recommend using single inheritance as much as possible, bringing in mixins horizontally only when necessary
I'm not sure what you mean by inheritance, but I don't think you want it.
just create a new map with the attributes you want based on "merging" the "parent" with any new attributes.
Right, I'm thinking you were thinking the middleware vector would contain these maps that get merged in, but their ordering is used to determine how any functions are ordered that need to line up
yea. it seems like you need some way to run a series of transformations on the input.
since you don't have the input until later.
for other attributes, you have all the info you need and can just use a new value for the attribute or remove the attribute as needed.
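Concretely, running the ordered transformations once the input arrives might look like this (a sketch; apply-middleware is an illustrative name):

;; run the ordered middleware over the input, then hand the result to the render fn
(defn apply-middleware [{:keys [render middleware]} input]
  (render (reduce (fn [acc f] (f acc)) input middleware)))

(apply-middleware {:render str
                   :middleware [inc #(* 2 %)]}
                  10)
;=> "22"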
Okay, there's still questions I have about this route, but I think we're getting pretty far into the weeds where it'd be easier to talk about if we just had an implementation. I'm going to ruminate on an invokable map impl. I'd like a prototype impl to be as similar as possible across clj and cljs, so I'll think about it.
because some of these keys, the work they do is not on the inputs but on the environment itself (the currently merged history of maps, depending on where we are in that chain). That's all some affects do: update the map for you so you don't have to
But yeah, some of this stuff would shake out better in an invokable map impl, where a lot of the affect composition can just be done with fns we use on maps
I don't think you need a key that edits the map. You should just be able to edit the map
right, you could define that fn and apply it from the outside, as opposed to it being a trait within the map
We'll see, I gotta flip it - hang the fn off the data instead of the data in the fn, then see if some of your suggestions can simplify it further
hopefully, I didn't encourage you down a more complicated path!
Cool, I'll let you know what I come up with. Thanks for placing your seasoned eyes on this problem space, really appreciate your intuition here
So records actually work pretty well for this:
(ns af.fect2
  (:require [clojure.pprint :as pp]))

(defn run-af [env]
  (let [afs (:af env [])
        op (:op env (fn [& args] args))
        new-env (->> afs (reduce (fn [arg af] (af arg)) env))]
    (fn [& args]
      (let [ins (:in new-env [])
            new-args (->> ins (reduce (fn [arg in] (apply in arg)) args))
            res (apply op new-args)
            outs (:out new-env [])
            out-res (->> outs (reduce (fn [arg out] (out arg)) res))
            fin-env (assoc new-env :res out-res)
            fins (:finally fin-env [])]
        (->> fins (reduce (fn [arg fin] (fin arg)) fin-env))
        out-res))))

(defmacro daf [afname ctx]
  `(do (defrecord ~(symbol (str ">" afname)) []
         clojure.lang.IFn
         ;; generate invoke arities, all delegating to run-af
         ~@(->> (range 22)
                (map (fn [n]
                       (let [args (for [i (range n)] (symbol (str "arg" i)))]
                         (if (empty? args)
                           `(~'invoke [this#]
                              ((run-af this#)))
                           `(~'invoke [this# ~@args]
                              ((run-af this#) ~@args)))))))
         (~'applyTo [this# args#]
           (apply (run-af this#) args#)))
       (def ~afname
         (merge (~(symbol (str "->>" afname)))
                ~ctx))))

(daf affect
  {:id :affect
   :af []
   :in []
   :op (fn [& args] args)
   :out []
   :finally []}) ;=> #'af.fect2/affect

affect ;=> #af.fect2.>affect{:id :affect, :af [], :in [], :op #function[af.fect2/fn--7889], :out [], :finally []}

(pp/pprint affect)
; {:id :affect,
;  :af [],
;  :in [],
;  :op #function[af.fect2/fn--7889],
;  :out [],
;  :finally []}

(def a+ (merge affect {:id :+ :op +})) ;=> #'af.fect2/a+

(pp/pprint a+)
; {:id :+,
;  :af [],
;  :in [],
;  :op #function[clojure.core/+],
;  :out [],
;  :finally []}

(apply a+ 1 (range 30)) ;=> 436
(a+ 1 2 3 4 5) ;=> 15

(def a_inc_+_dec
  (-> a+
      (assoc :id :inc-+-dec)
      (update :in conj (fn [& args] (mapv inc args)))
      (update :out conj #(dec %)))) ;=> #'af.fect2/a_inc_+_dec
;; (update :finally conj (fn [res] (println :done! res)))

(pp/pprint a_inc_+_dec)
; {:id :inc-+-dec,
;  :af [],
;  :in [#function[af.fect2/fn--7901]],
;  :op #function[clojure.core/+],
;  :out [#function[af.fect2/fn--7903]],
;  :finally []}

(a_inc_+_dec 1 2) ;=> 4

(def more-stuff (assoc a_inc_+_dec :more :stuff))

(pp/pprint more-stuff)
; {:id :inc-+-dec,
;  :af [],
;  :in [#function[af.fect2/fn--7901]],
;  :op #function[clojure.core/+],
;  :out [#function[af.fect2/fn--7903]],
;  :finally [],
;  :more :stuff}

(more-stuff 1 2) ;=> 4
I'm going to see if it scales with comp.el
you can probably use map->MyRecord directly instead of ->MyRecord + merge
I forgot, what's the difference between kvs added after the record is made vs those defined in the vector in its definition? The original ones have faster lookup or something?
I don't remember the performance differences, but I think the record will always contain the key if it's included in the definition
(get my-record :defined-key :not-found) ;; nil
and I don't remember the exact behavior, but dissoc is weird for defined keys. It either converts it to a map or sets the key to nil. I can't remember which.
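For what it's worth, dissoc'ing a declared key drops you down to a plain map, while dissoc'ing an extra key keeps the record type; a quick REPL check:

(defrecord Point [x y])

;; removing a declared field converts the record to a plain map
(dissoc (->Point 1 2) :x) ;=> {:y 2}
(record? (dissoc (->Point 1 2) :x)) ;=> false

;; removing an extra key keeps the record type
(record? (dissoc (assoc (->Point 1 2) :z 3) :z)) ;=> true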
For another similar lib, I've used defrecord without specifying any keys in the definition.
I might add those half dozen defaults if there's a perf benefit or whatever. But yeah, can't get rid of them I think
Posting this back to the channel. With the help of @smith.adriane, we've winnowed it down into a more data-oriented approach:
(defn run-af [env & args]
  (let [afs (:af env [])
        op (:op env (fn [& op-args]
                      (case (count op-args)
                        0 nil
                        1 (first op-args)
                        op-args)))
        new-env (->> afs (reduce (fn [arg af] (af arg)) env))
        ins (:in new-env [])
        new-args (->> ins (reduce (fn [arg in] (apply in arg)) args))
        res (apply op new-args)
        outs (:out new-env [])
        out-res (->> outs (reduce (fn [arg out] (out arg)) res))
        fin-env (assoc new-env :res out-res)
        fins (:finally fin-env [])]
    (->> fins (reduce (fn [arg fin] (fin arg)) fin-env))
    out-res))

(defmacro daf [afname ctx]
  `(do (defrecord ~(symbol (str ">" afname)) []
         clojure.lang.IFn
         ~@(->> (range 22)
                (map (fn [n]
                       (let [args (for [i (range n)] (symbol (str "arg" i)))]
                         (if (empty? args)
                           `(~'invoke [this#]
                              (run-af this#))
                           `(~'invoke [this# ~@args]
                              (run-af this# ~@args)))))))
         (~'applyTo [this# args#]
           (apply run-af this# args#)))
       (def ~afname
         (~(symbol (str "map->>" afname)) ~ctx))))

(daf add {:op +})

(def add-and-inc
  (-> add
      (update :out conj inc)))

(add-and-inc 2 2) ;=> 5
I didn't follow the complete discussion, but I find the concepts really interesting. It reminds me of Aspect-Oriented Programming for FP.
Thanks, yeah, I can see that. The difference here, I think, is that we're not "cross cutting" as much as cutting down the center of our functional pipeline. You could add orthogonal concerns, like logging or something, but you can also patch in function behaviors without having to change existing code. If our pipelines are up and down and "cross cuts" are horizontal, this is more like a vertical version of AOP, I think
It also doesn't need a whole-program preprocessor for "weaving" in behaviors, since we're keeping our functions as data that can be manipulated at runtime
Much of AOP is also possible with interceptors, like adding behaviour in Pedestal. In Java applications, without using AspectJ, you could resort to e.g. ServletFilters as interceptors, which is basically the same mechanism as in Pedestal. Interceptors are a bit limited, of course, because it's just one join point that you can instrument. As far as I understand your design, it's more flexible than that.
Yea, I would also compare this approach to AOP.
For the design, I think the most important part is thinking about which attributes to support and what their semantics should be.
Yeah, and that's still up in the air. I'm just spitballing what might be good semantics but I've been trying lots of different ones, even in these impls. My hope is to provide just the minimal thing that allows for others to build anything they could want on top of it.
I don't know if I'd say my design is more flexible than interceptors. Perhaps in the sense of being simpler. But I do like how the enter and leave of interceptors give an interceptor author the ability to update both the upstream and downstream of a given interceptor. I think people are going to need easy ways to manipulate these chains even after they're defined. So I'm thinking about a version where :in, :op and :out are interceptor chains. Or a version where there's just one interceptor chain, with the :op at the end, and :enter constitutes the :in and :leave constitutes an :out.
In the above example, you could just update the order of any of those vectors because they're stored as data, so you can use any data-slicing methods you prefer.
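For example, a sketch against the add-and-inc definition above (the new names here are made up):
(def add-and-inc-and-str
  (update add-and-inc :out #(conj (vec %) str))) ; coerce to a vector so the new step appends rather than prepends
(add-and-inc-and-str 2 2) ;=> "5"
(def add-and-str
  (update add-and-inc-and-str :out #(vec (remove #{inc} %)))) ; slice the inc step back out
(add-and-str 2 2) ;=> "4"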
Simpler and more general, but could get messier, whereas interceptors might bring more sanity
> I think people are going to need easy ways to manipulate these chains even after they're defined.
That's actually one of the things I specifically don't like about interceptors. You're essentially creating a mini virtual machine.
It's the best try at solving that kind of problem I know of - changing dispatch semantics in a pipeline over time
Yeah, interceptors are a later possible feature I think. You can do everything you want with access to that vector
For designing the semantics, I think it would be helpful to have a rationale or problem statement written.
"Tired of changing your code every time a new feature is requested? Use ThisThing and add changes that only affect callers that need the new feature, without breaking existing callers (also reduces defensive code duplication). ThisThing does this by giving you the ability to update behaviors at various points in the lifecycle of a function, even after it has been defined, by turning functions "inside out" and treating its parts as data."
That's more like a sales pitch. I was thinking more like https://youtu.be/fTtnx1AAJ-c?si=vmaLXEP70WYYzxPK&t=1899
In some ways, it allows you to treat functions like macros, because by turning functions into data, other functions can act on them in a similar way to macros, letting you sneak into the scope of a fn, behind its closure wall, and tweak that data before it executes. So in that way, it's so general that it's hard to choose just one "problem statement." Like macros, there are lots of reasons for them and lots of different kinds of problems they solve.
> So in that way, it's so general that it's hard to choose just one "problem statement."
One way to deal with that is to make the problem smaller. One way to make the problem smaller is to specialize it (i.e. reduce the scope to a smaller problem). At some later point, if your solution addresses the smaller problem, you can try to generalize the approach. Generalizing is easier because you've already learned more about the problem and solution spaces while working on the smaller problem. (Or, if the approach didn't work on the smaller problem, you've saved a bunch of time and effort.)
For example, I think limiting the scope to just focus on describing UI components might be useful, which was the original goal in the first place.
Yeah true. I'm going to tackle rewriting comp.el in this new formalism soon. Maybe tonight
I still think it would be helpful to try and write down some sort of problem statement for that purpose as well. Writing things down is unreasonably effective in my experience.
I wrote a design series for one of my libraries thinking it would help other people, but I'm pretty sure the biggest improvement was in my own thinking. I found so many design issues and jargon issues in my project by just writing things down.
I'm not sure these are well formed, but they're public:
• https://blog.phronemophobic.com/what-is-a-user-interface.html
• https://phronmophobic.github.io/membrane/membrane-topics.html
That Design In Practice youtube link is a much better resource.
That post is from 3 years ago!
https://www.amazon.com/Aspect-Oriented-Software-Development-Use-Cases/dp/0321268881
I think you have to get a better grasp of what these things are, their tradeoffs, etc, before you can really pick a good name.
I'm with @smith.adriane regarding the relevance of the name. Once it's in use it will stick, and for it to be used it has to be good and convey the meaning of the concepts behind it. That's how it went with e.g. FP, OOP, AOP.
As far as I understand it, your data driven approach to function definitions/implementations is providing extension points to change the behaviour (effect) of the function when called, depending on the data provided with the map. And because it is just a map, it can be manipulated.
To define and convey the meaning, maybe you should answer a few questions, like: What are the key differences/advantages of this approach compared to function redefinitions (e.g. memoize), which can also add an extension effect to the implementation? How does this approach differ from providing strategies via function parameters (e.g. with partial)?
IMHO your approach is more dynamic, because you're treating the extensions and implementations as data which can be changed and composed at runtime with the tools of data manipulation.
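For contrast, a sketch (assuming the daf-built add from the earlier snippet; padd2 and dadd2 are made-up names) of how partial seals its extension inside a closure while the data version keeps it visible:
(def padd2 (partial + 2)) ; closed: the 2 is sealed inside the closure
(def dadd2 (update add :in conj (fn [& args] (cons 2 args)))) ; open: the extension is inspectable data
(padd2 1 2) ;=> 5
(dadd2 1 2) ;=> 5
(:in dadd2) ; the extension fn is right there, and can be dropped or reordered later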
What could you do with this approach? Some classic use cases for AOP are instrumentation (e.g. tracing, change detection, validation). More interesting would be some examples of how to extend the functionality of an application without changing the existing code. An example given in "AOSD with Use Cases", which I referenced earlier, is to extend a hotel reservation system with a waiting list feature as an extension, without changing the code for the reservation functionality.
What about "open functions," so as to contrast with functions usually "closing" over their implementations
I don't know. If y'all think "affect" is too tacky or off the mark, I personally think "data functions" or "open functions" sorta conveys the meaning of the idea. I can't think of many other analogies. I think open relates to partial, in the sense that a partial is partially open and partially closed
Like I said earlier, I think you have to have a better understanding of what it is before you can choose a good name.
I think your idea of decoupling impl and data type instance is different than this one and is more-like a superset of data/open function functionality
I don't know. My approach would be to do more design work:
• find and read prior art
• create a table that differentiates it from different approaches to similar problems
Even just organizing the discussion so far into summarized form would probably help.
Yeah but see I'll go through all this work, writing up a design doc that explains how this thing relates to some word in the dictionary, and then y'all'll be like, "No, we hate that name too!" š
So Google's Gemini thinks we should either call them transformers, or coin a new term for it: fluxors
But I mean, you literally are transforming one function into another function via a series of data transformations
Okay, how about we call these things transformers and my lib can be called Deft because it provides a deft macro for creating a root transformer. And if anyone wants to make a different transformers library they can call it something else, but we can all call them transformers. Yeah?
I only want to get a reference prototype built with deft, for the purpose of sussing out good semantics for the function composition stuff. But if it's a good idea I'd hope others would make better, different or more performant implementations.
So it seems to check out with mathematical language: "In category theory, a branch of mathematics, a natural transformation provides a way of transforming one functor into another while respecting the internal structure of the categories involved. Hence, a natural transformation can be considered to be a 'morphism of functors'."
Very cool. One thing to watch out for is using a word with precise meaning, but using it incorrectly. My category theory is too weak to tell if that's the case here.
But yea, the best case is if you do find an existing idea that you can build on top of.
then people can connect their prior knowledge to your ideas.
Oh wow, so it's already sorta a thing:
"A monad transformer makes a new monad out of an existing monad, such that computations of the old monad may be embedded in the new one. To construct a monad with a desired set of features, one typically starts with a base monad, such as Identity
, []
or IO
, and applies a sequence of monad transformers."
https://hackage.haskell.org/package/transformers-0.6.1.1/docs/Control-Monad-Trans-Class.html
I even had a variadic identity function as the base :op fn in my impl. Very similar in spirit.
I'll study up on that. I don't understand monads very well yet, but I should be able to figure it out. I'd also be interested in hearing from Clojurists what API they'd want from a function transformer in Clojure, and what features from monad transformers Haskeller Clojurists would want to bring over to Clojure-land.
And what does lift mean in the Control.Monad.Trans.Class docs and in Haskell in general?
I'm also trying to think through what a transducer transformation api might look like :thinking_face:
I'm working on docs right now and coming up with examples. I've got a dispatch pattern similar to multi methods but only for ancestors. Then a stateful version to show how you could use state to implement full blown multi methods. I have the mocking examples. The todomvc examples. I'd like a few more use cases to bang on the api before releasing an alpha of Deft
haskell transformer docs seem to have some interesting use cases to draw inspiration from
A library called https://github.com/jacekschae/conduit seems to have a simplified api for building transducers. Might be able to draw inspiration there for a simple, declarative, data-driven description of transducers, that can easily be morphed/transformed into other transducers
And I wonder how many ideas from Sussman's "Layering" talk could be implemented in Clojure transformers: https://www.youtube.com/watch?v=EbzQg7R2pYU
Yeah, def sympathizing with the transformer stack wrangling here https://youtu.be/8t8fjkISjus?si=KXr7gztAgUtoNJWM
Interesting take (pdf): https://drops.dagstuhl.de/storage/01oasics/oasics-vol076-plateau2019/OASIcs.PLATEAU.2019.3/OASIcs.PLATEAU.2019.3.pdf (for OCaml, I believe), where they say:
> Speaking from the personal experience of implementing thousands of lines of program transformations, it is difficult to maintain a declarative, consistent, and reusable pattern of implementation that scales well for even a few dozen transformations. To resolve these issues, we propose a domain-specific language for program transformations that can operate on three different levels of abstraction: the concrete syntax tree, the abstract syntax tree, and the generalized syntax tree. The concrete syntax tree and abstract syntax tree are familiar: the former including rigid details such as the exact whitespace of the code to be transformed and the latter including only the underlying structure that is fed to, for example, the evaluator of the language. The generalized syntax tree operates at an even higher level than the abstract syntax tree, and allows for an even more declarative approach to specifying program transformations.
So the tool they're talking about still operates at compile time, patching things transparently like AOP. But similar to what they're saying there, I think of these function transformers as a higher-level, but still intermediate, representation of a function, above the token level and the AST level. I believe that's how Elixir's macros work - via the AST. But if you're just using the higher-level, intermediate model for the function, it can be a kind of macro that can be defined at runtime and operate on functions made of data at runtime. It appears that's how Haskell's transformers lib works, updating monads as runtime data.
In the Haskell transformers sense, lisp macros are arguably transformers at the token level. These "function transformers" work at a higher-level semantic abstraction for functions. As a result, you can manipulate them at runtime, even without compile/eval or access to vars, in static environments like cljs or babashka
Cool that there's some input and prior art to draw from. It also validates your ideas in a way. For compile-time AST transformations we have macros, of course. But if I understand your approach correctly, it's a form of runtime transformation, which should be more dynamic. Maybe parts of the API could use macros, but in general a data-driven API is preferable, because it's more dynamic and composable.
Yeah, making "parts of functions as data" basically gives plain old functions macro power over those "functions as data"
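e.g. a sketch (assuming the daf-built add from above; traced is a made-up helper): an ordinary function rewriting another function at runtime, no macro or eval required:
(defn traced [af]
  (update af :finally conj
          (fn [env] (println :traced (:res env)) env))) ; return env so later fins keep threading
((traced add) 1 2 3) ; prints :traced 6, returns 6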
So hopefully that simpler implementation makes it easier to assess the idea. What do y'all think? Good idea? Bad idea? Pros and cons?