This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-11-04
I have read various comments/advice on where spec fits into the workflows of producing/consuming data, but I’m still not exactly clear on when and where “parsing” / “coercion” should happen vs “validation” / “conforming”, and spec’s ability to parse or “destructure” things (like a binary format for example) seems to blur the lines a little bit.
I’m also confused about where tools like https://github.com/exoscale/coax come in
Example:
I request/receive some data payload and convert it into an EDN structure more or less matching the shape of the raw payload (e.g. json deserialization)
I only care about a subset of the fields in the payload, and I need to:
• ensure the right fields are present
• pull them out of the source data, parse them, and put them into my application’s representation
Which parts is spec appropriate for? Would it make sense to use multiple specs to check both the raw data and my application representation?
Specifically regarding parsing, should a spec pull apart
• string value containing delimited values like v1/v2/v3
• date string mm/dd/yyyy
or should those use normal string/date handling functions to parse them, leaving the spec as a simple regex to ensure the format looks correct? (I suspect the answer is “it depends”, but I’m not sure where to draw the line)
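For the delimited-string case, one option is to keep the spec as a shape check and do the actual parsing with ordinary string functions. A minimal sketch (the spec name and helper are made up for illustration):

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.string :as str])

;; The spec only checks the raw string's shape...
(s/def :raw/version-path
  (s/and string? #(re-matches #"[^/]+(?:/[^/]+)*" %)))

;; ...while parsing stays in a plain function, outside of spec.
(defn parse-version-path [s]
  (str/split s #"/"))

(s/valid? :raw/version-path "v1/v2/v3") ;=> true
(parse-version-path "v1/v2/v3")         ;=> ["v1" "v2" "v3"]
```

This keeps validation and parsing as separate, composable steps, which is one way to draw the line asked about above.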
I think I have misunderstood the sequence specs and thought I could apply them to strings. However, the v1/v2/v3 really is a sequence, but in string format, which again brings up the question of when the parsing should happen.
It feels like spec validation is not very useful before parsing, because parsing implicitly includes validation. I’m leaning towards:
• parse input data into an internal structure, parsing/converting fields as needed
• write specs against the internal structure, and validate the transformed data before passing it among internal functions
I still don’t see the use case for the coercion libs
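That plan can be sketched roughly like this (the field names and the raw payload shape are invented for illustration):

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.string :as str])

;; Spec describes the *internal* representation, not the wire format.
(s/def :order/id nat-int?)
(s/def :order/tags (s/coll-of string? :min-count 1))
(s/def :app/order (s/keys :req [:order/id :order/tags]))

;; Parsing converts the raw (JSON-shaped) map, field by field...
(defn raw->order [{:strs [id tags]}]
  {:order/id   (Long/parseLong id)
   :order/tags (str/split tags #"/")})

;; ...and validation happens on the result, before it travels further.
(defn parse-order! [raw]
  (let [order (raw->order raw)]
    (if (s/valid? :app/order order)
      order
      (throw (ex-info "invalid order" (s/explain-data :app/order order))))))

(parse-order! {"id" "42", "tags" "v1/v2/v3"})
;=> {:order/id 42, :order/tags ["v1" "v2" "v3"]}
```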
After reading https://groups.google.com/g/clojure/c/Tdb3ksDeVnU/m/ryc28SteAgAJ and considering how fully qualified keywords work in spec, it’s a bit clearer, but I still don’t have a fully formed idea of how to move between two data types with the help of spec/coax
Coax is mostly used to convert from JSON to EDN for us, either from HTTP requests or inside DB fields. In our case we cannot serialize as EDN (or transit/fressian & co) because other languages have to be able to use this data as well. It does no validation though; it will coerce if it can/has to, otherwise it leaves values as-is.
IIRC it uses almost nothing from spec internally, and that’s intentional. It’s strictly a transformation pass inferred from a spec form/coax registry
A common (imho) mistake I often see in the wild is trying to tie your specs to coercion, using s/and and conformers. That turns things into franken-specs quickly, makes conforming all kinds of broken when you need it, and basically closes the door to having multiple coercion strategies for a spec depending on usage context
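To make that anti-pattern concrete, here is a hypothetical franken-spec built with s/and and a conformer; note how data that has already been coerced no longer validates:

```clojure
(require '[clojure.spec.alpha :as s])

;; Anti-pattern: coercion baked into the spec via s/and + conformer.
(s/def :franken/quantity
  (s/and (s/conformer (fn [x]
                        (try (Long/parseLong x)
                             (catch Exception _ ::s/invalid))))
         nat-int?))

(s/valid? :franken/quantity "3") ;=> true
(s/valid? :franken/quantity 3)   ;=> false — already-coerced data fails!

;; Keeping the spec pure leaves the coercion strategy to the call site.
(s/def :order/quantity nat-int?)
(s/valid? :order/quantity 3)     ;=> true
```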
Some other validation engines/libs have different strategies, but that has to be thought out beforehand, allowing operations to compose when values are “visited”.
Personally I quite like the spec+coax combo. I might be biased (I wrote coax), but we use it heavily at work and it’s been quite frictionless so far
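A minimal sketch of that combo, assuming coax’s `coerce` entry point as shown in the exoscale/coax README (the spec name here is invented):

```clojure
(require '[clojure.spec.alpha :as s]
         '[exoscale.coax :as c])

(s/def :acct/id int?)

;; The coercion is inferred from the spec form: string -> long here.
(c/coerce :acct/id "1") ;=> 1

;; An already-correct value passes through unchanged.
(c/coerce :acct/id 1)   ;=> 1
```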
@U050SC7SV so a coax pass will run and transform some fields to match the desired spec before the data is ever used?
I’m dealing with multiple third-party APIs, which makes this more interesting because the data format and structure are not easily usable without large transformations. I currently do mostly “manual” parsing, but I’m trying to simplify it and add spec into the equation where appropriate
@U050SC7SV do you use “reverse” coercions to go the opposite direction, i.e. get data back into the format necessary to interact with the other system?
I guess it’s really just another spec with a different qualifier, e.g. :application/field might go from string to int and :other-system/field might go from int to string
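One way to picture that idea, with hypothetical field specs and hand-written converters in each direction (a coercion library could infer these from the specs instead):

```clojure
(require '[clojure.spec.alpha :as s])

(s/def :application/field nat-int?)  ; internal representation: int
(s/def :other-system/field string?)  ; wire representation: string

;; One converter per direction; each targets a differently
;; qualified spec.
(defn ->application [m]
  {:application/field (Long/parseLong (:other-system/field m))})

(defn ->other-system [m]
  {:other-system/field (str (:application/field m))})

(->application {:other-system/field "7"}) ;=> {:application/field 7}
(->other-system {:application/field 7})   ;=> {:other-system/field "7"}
```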
Yes in one particular case. It’s one of the good things with coax. Coercion rules are first class
@U08FV2128 I have the same question, so rather than providing an outright answer, I'll report on what I'm currently trying. I validate user input from the HTTP API with spec A, coerce it to my app-internal representation manually (will have a look at coax), and validate the app-internal data in the rest of my app with spec B.
So, e.g., JSON input field :quantity is validated with (s/def :order.input/quantity (s/and string? #(re-matches #"^\d+$" %))), and coerced into field :order/quantity, which is validated with (s/def :order/quantity nat-int?).
This models the two "sides" of the HTTP API interface with separate specs:
User ←[spec A]→ API ←[spec B]→ Rest of the system
I have no idea if this is The Right Way, but it works well, though it is fairly verbose, even a little tedious.
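Putting those two specs together, the manual coercion step between the two sides might look like this (a sketch; `input->order` is an invented helper):

```clojure
(require '[clojure.spec.alpha :as s])

;; Spec A: the user-facing input side.
(s/def :order.input/quantity (s/and string? #(re-matches #"^\d+$" %)))
;; Spec B: the app-internal side.
(s/def :order/quantity nat-int?)

;; Manual coercion between the two sides of the interface.
(defn input->order [{:keys [quantity]}]
  (when (s/valid? :order.input/quantity quantity)
    {:order/quantity (Long/parseLong quantity)}))

(input->order {:quantity "3"})  ;=> {:order/quantity 3}
(input->order {:quantity "-3"}) ;=> nil (fails spec A)
```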
@UJY23QLS1 that sounds similar to what I have come up with so far.