2021-07-13
Is it possible to define a default value for a map entry in terms of another map entry? E.g. there are two keys in a map, primary and secondary, and if secondary is missing it is set based on the primary one.
A schema expressing it could look like this:
[:map
 [:primary string?]
 [:secondary {:default-fn '(fn [m] (:primary m))} string?]]
out-of-the-box no, but you can implement this in user space, using a custom transformer. I recommend reading the source code of mt/default-value-transformer.
My guess is that you should register the transformer at the map level, peek the entries and, given there is a :default-fn, call it with the map value. Should be straightforward. Could be a nice example for docs/tips.md.
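A rough, untested sketch of that suggestion (:default-fn is the entry property from the question above, not something malli knows about, and the transformer name is made up):

(require '[malli.core :as m]
         '[malli.transform :as mt])

;; a :map decoder whose :compile hook peeks the entries for :default-fn;
;; missing keys are filled by calling the declared fn with the map value
(def default-fn-transformer
  (mt/transformer
   {:decoders
    {:map {:compile (fn [schema _]
                      (let [defaults (for [[k props _] (m/children schema)
                                           :when (:default-fn props)]
                                       [k (:default-fn props)])]
                        (when (seq defaults)
                          (fn [m]
                            (if (map? m)
                              (reduce (fn [m [k f]]
                                        (if (contains? m k) m (assoc m k (f m))))
                                      m defaults)
                              m)))))}}}))

(m/decode
 [:map
  [:primary string?]
  [:secondary {:default-fn (fn [m] (:primary m))} string?]]
 {:primary "a"}
 default-fn-transformer)
;; expected: {:primary "a", :secondary "a"}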
...
I had a look today at the default value transformer, and based on what you wrote I've made an example where the default value is produced by a given function instead of a fixed value (as in the current impl). I modified default-value-transformer to do that. But at the end I realised that while I could handle this case:
[:map
 [:primary string?]
 [:secondary {:default-fn '(fn [m] (:primary m))} string?]]
but I wouldn't be able to handle this one:
[:map
 [:primary string?]
 [:nested
  [:map
   [:secondary {:default-fn '(fn [m] (:primary m))} string?]]]]
because the given fn receives only the current map/submap, not the root map. How would you tackle that kind of transformation?
simplest would be just to add a normal transformer into the top-map, which would use normal clojure to transform the whole nested map.
you could make it declarative though, but it's not simple (a sketch follows the steps):
1. add the transformer to the top-level
2. add the declarations to any children
3. use the :compile hook to 1 to get access to the full nested schema
4. within it, walk the schema and collect all the declarations and their paths, and create a function to transform the child values
5. on transformation, the "compiled" transformation is applied at the top; it already knows what to do and is fast
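A rough, untested sketch of those five steps (:default-fn and the :top-level marker property are made up for the example, and collect-default-fns is a hypothetical helper, not part of malli):

(require '[malli.core :as m]
         '[malli.transform :as mt])

;; step 4: collect [value-path default-fn] pairs from a (possibly nested) :map schema
(defn collect-default-fns [schema path]
  (if (= :map (m/type schema))
    (into []
          (mapcat (fn [[k props child]]
                    (cond-> (collect-default-fns child (conj path k))
                      (:default-fn props) (conj [(conj path k) (:default-fn props)]))))
          (m/children schema))
    []))

(def nested-default-fn-transformer
  (mt/transformer
   {:decoders
    {:map {:compile
           ;; steps 1 + 3: a :map decoder whose :compile hook sees the whole schema;
           ;; it only fires on the map marked as :top-level, so nested maps are skipped
           (fn [schema _]
             (when (:top-level (m/properties schema))
               (let [defaults (collect-default-fns schema [])]
                 (when (seq defaults)
                   ;; step 5: the compiled fn runs against the root value, so every
                   ;; :default-fn receives the whole map, not just its own submap
                   (fn [value]
                     (reduce (fn [v [path f]]
                               (if (some? (get-in v path)) v (assoc-in v path (f v))))
                             value defaults))))))}}}))

(m/decode
 ;; step 2: declarations live on the children, the marker on the top-level map
 [:map {:top-level true}
  [:primary string?]
  [:nested [:map [:secondary {:default-fn (fn [m] (:primary m))} string?]]]]
 {:primary "a" :nested {}}
 nested-default-fn-transformer)
;; expected: {:primary "a", :nested {:secondary "a"}}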
> simplest would be just to add a normal transformer into the top-map, which would use normal clojure to transform the whole nested map.
you mean, without Malli? I know that would be the simplest, but I'm trying to understand malli transformers thoroughly, and this problem looks like a good one for understanding how Malli transformers work.
> actually, it is simple) 😉
I believe you that it is simple, but at my stage of understanding malli transformers I'm reading each of your responses 10 times and it is not enough 🙂
I mean I roughly understand some of the bits already, based on the code and docs, but it is still not enough to understand some of the terminology you use.
TBH the first three steps are not really clear to me.
What do you mean by
"add the transformer to the top-level"
"add the declarations to any children"
"use the `:compile` hook to 1 to get access to the full nested schema" - I briefly know how the compilation works (preprocessing of the schema and passing discovered data within closured fn into interceptor), but I don't understand the phrase "hook to 1"
---
Also, within (transformer) I noticed that interceptors are created from transformers (from :decoders or :encoders) and from defaults (from :default-decoder or :default-encoder). default-value-transformer uses both.
I mean, I look at the code and I don't understand what it is about. It looks like something obvious once you know it. Can you tell me what it is about?
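For illustration, a small untested sketch of the difference as described later in the notes (the transformer and its behaviour are made up): :decoders/:encoders are keyed by schema type, while :default-decoder/:default-encoder is the fallback for schemas that have no type-specific transformer.

(require '[clojure.string :as str]
         '[malli.core :as m]
         '[malli.transform :as mt])

(def trimming-transformer
  (mt/transformer
   {;; type-specific decoder: only :int schemas get this one
    :decoders {:int (fn [x] (if (string? x) (Long/parseLong x) x))}
    ;; fallback: any schema without a type-specific decoder gets this one
    :default-decoder (fn [x] (if (string? x) (str/trim x) x))}))

(m/decode [:map [:a :int] [:b :string]]
          {:a "42" :b "  hi  "}
          trimming-transformer)
;; expected: {:a 42, :b "hi"}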
I got back to the topic, trying to find answers to all my questions.
@U055NJ5CC would you be so kind as to read the notes below and correct or complement them if something is missing. If the notes are correct, I could polish them a bit, maybe add examples, and add them to the Malli docs if you wish.
There is a hierarchy of hooks that Malli handles when transforming values (in this order of priority):
• from schema properties: :decode/<name>, e.g. :decode/math - provides a transforming fn for the schema enclosing it (to enable it, a transformer with the given name needs to be applied, e.g. (mt/transformer {:name :math}))
• from schema type properties: :decode/<name>, e.g. :decode/math - provides a transforming fn for the schema type enclosing it (enabled the same way as above)
• from the transformer definition: :decoders, :encoders - provides a map of (schema type -> transforming fn), so for every such schema in the given value's tree, the given fn is applied
• from the transformer definition: :default-decoder, :default-encoder - provides a last-resort transforming fn: if none of the above hooks apply, this one is used. It applies to the whole schema, so it might be called a top-level hook.
Moreover:
• :enter/:leave refer to the interceptor stages, so we can provide separate transforming fns; they directly hold the final function modifying the value. The map holding :enter/:leave can be provided either by a compiling function or directly.
• compiling is a technique that closes over precomputed values. To use it, instead of a transforming function, a map {:compile <compiling-fn>} is given. A compiling fn takes the schema and options and returns a transforming fn. That way, processing of schema specifics can be done once, at the time of constructing the transformer.
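A couple of small, untested snippets to go with the notes (the :math name and the :math/multiplier property are made-up examples):

(require '[malli.core :as m]
         '[malli.transform :as mt])

;; schema-property hook: :decode/math on the schema, enabled by applying a
;; transformer whose :name is :math
(m/decode
 [:int {:decode/math (fn [x] (* x 10))}]
 1
 (mt/transformer {:name :math}))
;; expected: 10

;; :compile hook: the schema-specific work (reading the property) happens once,
;; when the transformer chain is built; the returned fn is the actual decoder
(m/decode
 [:int {:math/multiplier 3}]
 2
 (mt/transformer
  {:decoders {:int {:compile (fn [schema _]
                               (let [k (:math/multiplier (m/properties schema) 1)]
                                 (fn [x] (* x k))))}}}))
;; expected: 6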
WIP: function schema guide. Wanted to push all aspects into a single page, so that it's easier to read what is possible and how things stack up. Is it any good? Anything missing feature-wise? I have time to polish things this week; comments most welcome: https://github.com/metosin/malli/blob/08111040566e2d39ecff62dfccdcb834f7a23140/docs/function-schemas.md
PR is here for comments & fixes: https://github.com/metosin/malli/pull/471
So actually orchestrating a project could be done in terms of m/=>, (mi/start!) and (mi/stop!). Am I correct?
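For reference, m/=> attaches a function schema to an existing var (a minimal, untested sketch; whether start!/stop! live in a malli.instrument ns is exactly what the question above asks):

(require '[malli.core :as m])

(defn plus [x y] (+ x y))

;; register a function schema for the existing var; instrumentation tooling can
;; later pick it up from the function (var) registry
(m/=> plus [:=> [:cat :int :int] :int])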
Just a note about orchestrating projects using Malli schemas. I wanted to set up orchestration using malli in a way similar to how I used to do it with Orchestra. I checked out https://github.com/teknql/aave, https://github.com/setzer22/malli-instrument and https://github.com/CrypticButter/snoop.
Snoop: I've found it the best one for that purpose. Works fine with functions evaluated in the REPL, readable error messages. A bit odd way of enabling it (via a JVM flag); on the other hand, once set in the dev and test aliases, it just works, for the REPL and tests. Also the 'https://github.com/CrypticButter/snoop#inside-the-prepost-map' syntax for adding function schemas is great.
malli-instrument: similar syntax to spec (call instrument-all to instrument), but there are problems with orchestration when you continuously modify and evaluate schemas and functions within the REPL.
aave: I have had several looks at this library and its README and TBH I have no idea how to use it. I guess I'm missing some relevant piece of information regarding instrumentation with Malli.
I also find snoop very handy. Keeping the schema inline with the arg name is a huge win:
(>defn xy*2
  [(x :int) (y :int)]
  [=> :int]
  (* 2 x y))
At first it was weird to me that Snoop supports so many options for providing inline schemas, but in the end I used 3 variants and at the 3rd time I found my flavour (just plain [:input :output]). So I'm kinda glad that Snoop doesn't enforce one particular syntax 🙂
I'm surprised that not that many people use orchestration. Seriously, it is one of the killer features of Clojure. And with Malli and its data-driven approach, it is so pleasant and concise to write and modify specs/schemas.
I'm aspiring to bring accounting software to market, and when refactoring some of the code that does calculations, orchestration is such a helper, always watching and making sure the invariants hold.
@U023TQF5FM3 By "orchestration" do you mean https://github.com/jeaye/orchestra ?
> at the 3rd time I found my flavour (just plain `[:input :output]`)
Which flavor are you talking about?
I prefer providing schemas in a two-element vector following the params vector:
(>defn add [x y]
  [[:cat int? int?] int?]
  ...)
I found it the nicest to read 🙂
Here is an example:
https://github.com/CrypticButter/snoop#more-convenient-notations-that-work-when-using-defn
> By "orchestration" do you mean https://github.com/jeaye/orchestra ?
I use the "orchestration" term for instrumentation of selected functions in a project: you provide specs/preds/invariants and the tool, using instrumentation, watches the instrumented functions within your REPL and tests, either with Orchestra & spec or Snoop & Malli. I think the distinction between the two is that instrumentation is a technique, while orchestration is a practice that leverages instrumentation.
@grzegorzrynkowski_clo did you check the guide I just posted? There will be a malli.instrument ns, with common utilities for spec/orchestra-style usage. It's all built on the existing function (var) registry, so no new macros/defn-syntax. It would be great if all defn-wrapping libs could use malli.instrument internally; it would keep things more coherent. Not sure if that is possible though. All existing libs (aave, Snoop, malli.instrument) are great, it's just that the core instrumentation belongs in the core. Better defn wrappers are most welcome outside of it.
I've seen it, I was just looking for a ready-to-go solution right now. When the malli.instrument ns comes, most probably I will re-evaluate my setup 🙂
Is there a function to validate schemas' data structures? I was playing with Malli and transformers and was trying to figure out what was wrong with my custom schema. I tried to coerce data using the string-transformer and it didn't work. It turned out that the schema itself was malformed. Here is an example showing my problem:
(def Order
  [:map
   [:qty number?]
   [:price string? float?]])

(comment
  (m/validate Order {:qty "1" :price "1.1"}) ; => false
  (m/validate Order {:qty 1 :price "1.1"}) ; => true
  (m/decode Order {:qty "1" :price "1.1"} mt/string-transformer) ; => {:qty 1.0, :price "1.1"}, but I expected {:qty 1.0, :price 1.1}
  (->> (m/decode Order {:qty "1" :price "1.1"} mt/string-transformer)
       (m/validate Order))) ; => true
The schema [:price string? float?] is wrong. I unintentionally forgot to remove string?. It is a fairly small example, but within complex schemas such a mistake might be difficult to spot. It would be nice to have a way of checking the schema data structures themselves. Is there something like that?