just a shout-out that i'm biting the bullet and migrating our project with 4k+ LoC full of specs to malli and i'm totally loving it -- the fact that it's just plain symbols + data makes things so much easier to reason about. thanks a lot for this monumental effort of making malli work!

🎉 9

just a sanity check: it seems to me that using "plain data" for all the schemas (i.e. not wrapping them in m/schema), and then at a later point using a compile-time m/validator and m/explainer, is a decent approach, right? i've just whipped up this macro to perform assertions; it builds the explainer at compile time but does the actual validation at runtime

;; assumes (:require [malli.core :as m] [malli.error :as me])
(defmacro assert
  [s v]
  (when *assert*
    `(let [v# ~v]  ;; bind the value once so ~v isn't evaluated twice
       (when-let [ed# ((m/explainer ~s) v#)]
         (throw (ex-info "Validation failed"
                         (merge {::v v#}
                                (me/humanize ed#))))))))
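for reference, a usage sketch of the macro above (assuming malli.core is required as m and malli.error as me, and the macro is in scope):

```clojure
;; passes: the explainer finds no errors, so when-let yields nil
(assert [:map [:x int?]] {:x 1})

;; throws ex-info "Validation failed" with the humanized errors merged in
(assert [:map [:x int?]] {:x "not an int"})
```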


Hey, @lmergen thanks for this snippet. I wonder if this can be part of the default malli distribution @U055NJ5CC


I think that m/assert would be good, but it should work in all situations, so I believe the schema creation should happen at runtime.
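a minimal function-based sketch of what such a runtime assert could look like (the name and shape here are hypothetical, not malli's actual API; assumes malli.core as m and malli.error as me):

```clojure
(defn assert!
  "Validates value against schema at runtime; throws on failure, else returns value."
  [schema value]
  (if-let [ed (m/explain schema value)]   ; m/explain returns nil when valid
    (throw (ex-info "Validation failed"
                    {:value  value
                     :errors (me/humanize ed)}))
    value))
```

because the schema is an ordinary argument, this works for schemas only known at runtime, at the cost of compiling an explainer on every call.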

yes 1

@lmergen I'm curious about your experience report. There is a thread on Reddit about exactly this. Feel free to comment there if you want.


@lmergen that's exactly what I do. Most (all?) of the malli public API will coerce plain data to a compiled schema, which lets you keep your schema definitions nice and focused, then use compile-time m/validator and m/explainer. There are also the shorthands m/explain (and m/validate), which create a temporary explainer for you. That isn't performant, but for use cases like your macro it saves you having to create it yourself.
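to illustrate the difference between the shorthand and the compiled form (a sketch assuming malli.core is required as m):

```clojure
;; shorthand: builds a validator internally on every call
(m/validate [:map [:x int?]] {:x 1})       ;; => true

;; compiled: pay the schema-compilation cost once, reuse many times
(def valid? (m/validator [:map [:x int?]]))
(valid? {:x 1})                            ;; => true
(valid? {:x "1"})                          ;; => false
```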


I suppose a slight difference is that your macro will create the explainer at compile time, so up to you if you want that difference!


@borkdude i started that thread 🙂


but yes so far so good -- of course there's a "second system" benefit here, because i have a much better understanding of the model and i'm taking the opportunity to refactor a few things. one of the big changes is that rather than one humongous specs.cljc, i'm now creating many more namespaces, and each of the malli schemas lives within those namespaces. the reason is that spec works with (namespaced) keywords, so you can get away with putting them all in a single namespace, whereas malli seems to map more naturally onto splitting things up into different namespaces.

👍 3

@lmergen that was exactly my experience as well! One benefit of using def or defn for those data schemas is that the compiler helps check that they still exist. In spec you can delete or mistype a keyword and get bitten, but here the compiler (or clj-kondo) can help you.
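a sketch of that layout, with schemas as plain vars spread across namespaces (all names here are hypothetical):

```clojure
(ns app.user.schema)

(def User
  [:map
   [:id uuid?]
   [:email string?]])

(ns app.order.schema
  (:require [app.user.schema :as user]))

(def Order
  [:map
   [:buyer user/User]])   ; mistyping user/User fails at compile time / in clj-kondo
```

unlike a spec registry keyed by keywords, a missing or misspelled var is an unresolved-symbol error, not a silent lookup failure.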