#clojure-spec
2019-12-05
Lukasz Hall02:12:47

How fully is support for unqualified keywords baked into spec2? (Last example of https://github.com/clojure/spec-alpha2/wiki/Schema-and-select#unqualified-keys-1 fails with latest codebase)

(gen/sample (s/gen (s/select {:a int? :b keyword?} [:a])) 5)
Error printing return value (ExceptionInfo) at clojure.test.check.generators/such-that-helper (generators.cljc:320).
Couldn't satisfy such-that predicate after 100 tries.

seancorfield03:12:58

I expect that's just a bug in master -- or a doc bug.

seancorfield03:12:58

@lukaszkorecki Spec2 is very much a work-in-progress right now.

Quest09:12:38

Took a closer look at spec2. Really like the ease of programmatic definition, & the similar API is a plus. Good vision & I look forward to moving to it after CLJS support 🙂 👍

rickmoynihan10:12:01

While we wait for spec 2, is there any advice on how to structure spec 1 specs so that they’re more likely to be easily portable to spec 2's schema / select style?

rickmoynihan10:12:17

For example, defining essentially the schema (leaf) keys, and then defining what would be the selections on the fdefs.

rickmoynihan10:12:20

Obviously that can lead to some duplication… is there a way in spec 1 to dissoc a :req key from an s/keys spec?

rickmoynihan10:12:19

I can’t see such a thing; so I’m thinking it’s better to build the composites from multiple s/keys specs at the fdef sites with s/merge?

rickmoynihan10:12:53

i.e. it seems you can kind of simulate spec2's idiomatic structure with spec1

kszabo12:12:05

I had this discussion with Alex before: your ‘record types’ should all use :opt / :opt-un; at usage sites (endpoints/fdefs) you can tighten them down with (s/merge ::user (s/keys :req [:user/name :user/age]))
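A minimal spec 1 sketch of that pattern, with illustrative attribute names (`:user/name`, `:user/age` and the spec names are assumptions, not from the thread): the "record type" declares all its attributes as optional, and each usage site merges in the requirements it actually needs.

```clojure
(require '[clojure.spec.alpha :as s])

;; Attribute (leaf) specs.
(s/def :user/name string?)
(s/def :user/age nat-int?)

;; Loose "record type": attributes are all optional at definition time.
(s/def ::user (s/keys :opt [:user/name :user/age]))

;; Usage site (e.g. an endpoint or fdef): tighten by requiring keys.
(s/def ::create-user-input
  (s/merge ::user (s/keys :req [:user/name :user/age])))

(s/valid? ::user {})                                           ;; true
(s/valid? ::create-user-input {:user/name "Ada" :user/age 36}) ;; true
(s/valid? ::create-user-input {:user/name "Ada"})              ;; false
```

This mirrors spec 2's schema/select split: `::user` plays the role of the schema, and each `s/merge` at a call site plays the role of a select.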

rickmoynihan13:12:36

@thenonameguy: great, that’s what I’ve been doing. Thanks for the clarification. Is ‘record types’ a phrase Alex mentioned, or is it one you’ve coined (hence the bunnies?) I ask because I would’ve thought he would’ve said ‘schema types’ or just ‘schemas’, given that is the terminology used in spec2.

rickmoynihan13:12:12

Or is there some more nuance to that?

kszabo13:12:06

I just came up with it, case class/record type/schema, there are lots of names for arbitrary groupings of attributes. That is my conclusion from these spec2 developments: Aggregates are incidental and are context-dependent.

👍 4
kszabo13:12:34

And you can see the industry trend towards systems that are attribute-focused and not aggregate focused. Think of SQL vs Datomic for instance, folks appreciate the multi-dimensional flexibility of EAV* stores vs predetermined tables. There is also REST vs GraphQL where the predetermined ‘resource’ is inferior to getting just the attributes that you want.

👍 4
rickmoynihan13:12:56

Yeah, I do a lot of work with RDF; which is pretty much the poster child for property-based open-world thinking.

👌 4
alexmiller14:12:08

and the inspiration for parts of spec

rickmoynihan14:12:01

yeah and it really shows… same for clojure too actually 👍

rickmoynihan14:12:24

Incidentally I’m not sure if you or Rich had seen SHACL or ShEx; but they’re remarkably similar to spec (but for RDF): https://www.w3.org/TR/shacl/ https://shex.io/ I’ve often wondered if they were part of the inspiration behind spec, or whether spec and SHACL/ShEx just ended up in the same sort of place because of the nature of RDF / OWL. Either explanation seems plausible to me.

rickmoynihan14:12:25

Incidentally this (free) online book describes both of them and their usage, and their differences: https://book.validatingrdf.com/

kszabo13:12:11

Although you can take one more step and go to Pathom which is even more attribute-focused, there are no predetermined ‘types’ at all, very RDF-like

rickmoynihan13:12:16

I should check out pathom… it’s been on my radar for a while — but I’ve not really dug into it, so thanks for the pointer.

kszabo13:12:24

We are in the process of adding Pathom to all of our APIs so that we can later have a unified ‘maximal’ graph, it has been great so far

eggsyntax14:12:02

I thought there was a way to say in a map spec that a key could be either qualified or unqualified. I misremembered that :req-un behaved that way, but apparently not:

all-test> (s/def :foo/bar string?)
:foo/bar
all-test> (s/def ::baz (s/keys :req-un [:foo/bar]))
:dw-domain-specs.specs.all-test/baz
all-test> (s/valid? ::baz {:bar "hi"})
true
all-test> (s/valid? ::baz {:foo/bar "hi"})
false
Is there any good way to say that either qualified or unqualified is fine? I'd like to be able to reuse the same spec both in internal contexts where keys are qualified and at the system boundary where keys come in unqualified. The simplest thing I've thought of is to use :req-un and always strip namespaces before checking for validity. But I thought that spec itself had a way to express that...
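One spec 1 workaround for this (a sketch, not something proposed in the thread; the `::bar-*` names are illustrative): define both a qualified and an unqualified s/keys spec and combine them with s/or.

```clojure
(require '[clojure.spec.alpha :as s])

(s/def :foo/bar string?)

;; Two views of the same shape: qualified and unqualified keys.
(s/def ::bar-qualified   (s/keys :req    [:foo/bar]))
(s/def ::bar-unqualified (s/keys :req-un [:foo/bar]))

;; Accept either form.
(s/def ::baz (s/or :qualified   ::bar-qualified
                   :unqualified ::bar-unqualified))

(s/valid? ::baz {:foo/bar "hi"}) ;; true
(s/valid? ::baz {:bar "hi"})     ;; true
```

Note that s/or tags conformed values (e.g. `[:qualified {...}]`), so callers that use `s/conform` rather than `s/valid?` have to unwrap the tag.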

eggsyntax15:12:08

Oh well. Thanks, Alex! Might be an interesting feature to consider at some point, it'd make spec reuse easier and encourage people to use qualified keys when possible rather than defaulting to the least common denominator of unqualified keys (in cases where they have to deal with unqualified keys somewhere in their system).

alexmiller15:12:51

the intent with spec is to strongly encourage you to use qualified keys

alexmiller15:12:05

spec 2 has more support for working with unqual in schemas though

alexmiller15:12:30

unqualified that are not tied by name to qualified, but just directly to specs

✔️ 4
eggsyntax16:12:20

Meaning something like this?

(s/def :foo/bar string?)
(s/select {:bar :foo/bar} [:bar])

eggsyntax17:12:17

In a spirit of spec 2 brainstorming -- it seems like this case is similar to closed spec checking: something that it's better to avoid, but in some specific contexts turns out to be necessary. It would be cool if rather than write the boilerplate I gave above (which would be tedious and noisy in the case of an edge function that takes a map with a lot of unqualified keys) you could use an :unqualified setting with conforming API calls (just like with :closed):

(s/valid? ::foo {:bar "a"} {:unqualified true})
This also somewhat parallels the use of :keys in map destructuring for the common case where you want the binding names to be the same as the key names -- ie it handles the common case where the unqualified keys are the same as the names of the qualified keys. Just a thought 🙂

alexmiller18:12:45

again, trying to encourage qualified keys

eggsyntax18:12:50

Fair enough 🙂

alexmiller18:12:54

the settings are really for conform ops (valid? / conform / explain)

alexmiller18:12:07

unqualified would also need to be in play for gen

eggsyntax18:12:01

That makes sense. I was only suggesting it for valid? / conform / explain, but I see how gen breaks the parallelism with :closed in an important way.

eggsyntax16:12:49

> the intent with spec is to strongly encourage you to use qualified keys

For sure, and I'm 100% bought into that idea. But sometimes at the edges you have to accept unqualified keys, and it'd be nice to be able to say, even on a case-by-case basis, "I'm looking for qualified keys, but I'm willing to accept unqualified" rather than (as currently) either a) making a separate unqualified spec for the same sort of data structure (eg a user), for use at the edges; or b) using a :req-un spec across the board and internally stripping namespaces from keys before you check for validity. What would you say is the best way to handle that situation?

alexmiller16:12:11

I think I'd either do a) or go the other way and add a pre-transformation to qualify the attributes so they match the qualified specs

👍 4
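The pre-transformation Alex describes can be sketched like this (a minimal illustration; `qualify-keys` and the `::qualified-user` spec are hypothetical names, not part of spec itself): qualify the incoming unqualified keys at the boundary, then validate against the one qualified spec.

```clojure
(require '[clojure.spec.alpha :as s])

(s/def :user/name string?)
(s/def ::qualified-user (s/keys :req [:user/name]))

;; Hypothetical boundary helper: namespace every unqualified keyword
;; key so the map matches the qualified spec.
(defn qualify-keys [ns-str m]
  (reduce-kv (fn [acc k v]
               (assoc acc
                      (if (and (keyword? k) (nil? (namespace k)))
                        (keyword ns-str (name k))
                        k)
                      v))
             {}
             m))

(s/valid? ::qualified-user (qualify-keys "user" {:name "Ada"})) ;; true
```

The appeal of this direction is that the qualified spec stays the single source of truth; only the edge code knows that unqualified keys ever existed.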
alexmiller16:12:41

b is stripping good information so I don't think I'd do that

alexmiller16:12:00

if you're going to change the data, at least make it better :)

eggsyntax16:12:33

Good point, and I should have thought of that since I do have places where I already do that 😆