#clojure-spec
2018-02-14
slipset08:02:47

@misha ha I don't quite think this is my use case. I might very well be misunderstanding here, but you're not using :m/merged at all

slipset08:02:08

And I guess what surprises me here is that the order of how the specs are passed to s/merge is important.

slipset08:02:25

I guess you could argue that since the order of how maps are passed to clojure.core/merge is important, it follows that the order of the specs passed to s/merge is also important, but I have a hard time accepting that 🙂

misha08:02:10

@slipset but you do accept clojure.core/merge, don't you? :)

(merge {:a 1} {:a 2})  ;;=> {:a 2}
(merge {:a 2} {:a 1})  ;;=> {:a 1}

slipset08:02:28

absolutely

misha08:02:34

if you s/merge s/keys specs with :req/:opt, order should not be an issue, but it might be if you merge :req-un/:opt-un ones
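A minimal sketch of the qualified-key case (`::a`, `::m1`, `::m2` are made-up names): with :req, both s/keys forms resolve the key to the same registered spec, so merge order can't change how the value conforms.

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::a int?)                      ; the single registered spec for ::a
(s/def ::m1 (s/keys :req [::a]))
(s/def ::m2 (s/keys :req [::a]))

;; both orders conform ::a via the same spec, so the results are identical
(s/conform (s/merge ::m1 ::m2) {::a 1})
(s/conform (s/merge ::m2 ::m1) {::a 1})
```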

slipset08:02:32

Which I find surprising, since this is not mentioned in the doc-string, and I don't see why there should be a difference.

misha08:02:16

the first s/keys will conform :a via :foo/a, the second via :bar/a, which conform differently

misha08:02:15

then, conformed values will be merged, and the last one will win

misha08:02:10

(s/def :foo/a (s/or :i int?))
(s/def :bar/a int?)
(s/def :foo/m (s/keys :req-un [:foo/a]))
(s/def :bar/m (s/keys :req-un [:bar/a]))

(s/conform (s/merge :foo/m :bar/m) {:a 1})  ;;=> {:a 1}
(s/conform (s/merge :bar/m :foo/m) {:a 1})  ;;=> {:a [:i 1]}

holyjak10:02:21

If only specs were data and I could use Spectre to transform them...

mpenet10:02:22

@holyjak there are plans to improve that I think

mpenet10:02:56

nothing specific was mentioned but they stated the intent a few times

holyjak10:02:44

Crazy idea: perhaps I could have only a spec for the final order and, when validating a non-final step of the checkout, just filter out errors for paths I know this step doesn't require yet.

mpenet10:02:06

my comment was about spec composition, not about the merge oddities

vikeri10:02:10

So, if I run a spec check with 100 tests and it fails, re-using the seed will only reproduce the failure if I run 100 tests again? The seed doesn't seem to apply to the single test that fails, since if I change num-tests to 1 it usually passes. This decreases the usefulness of the seed for functions that take large data structures and take some time to run.

taylor13:02:48

seeding a RNG just makes it generate a certain sequence of values, it doesn’t mean it’s going to magically find the exact value in the sequence that triggered some behavior and then always generate that number first. I assume the same logic applies to test.check generators?

taylor13:02:31

anyway, when the check fails it spits out a minimal failing case that you could use in a “manual” test

alexmiller14:02:21

assuming you don’t use an external source of randomness in your generators , re-using the seed from a test should reproduce the identical generator values and the identical failure
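A minimal test.check sketch of what Alex describes (the property and names here are made up): the :seed returned by a failing run, fed back in with the same num-tests, replays the exact same sequence of generated values.

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.properties :as prop]
         '[clojure.test.check.generators :as gen])

;; deliberately broken property so the run fails
(def bad-prop
  (prop/for-all [v (gen/vector gen/int)]
    (< (count v) 5)))

(let [{:keys [seed]} (tc/quick-check 100 bad-prop)]
  ;; same seed + same num-tests => identical generated values, identical failure
  (tc/quick-check 100 bad-prop :seed seed))
```

Note the failing case still appears at whatever position it naturally occurred with that seed, not first.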

taylor14:02:50

true, I think the disconnect is expecting the failing case to be re-gen’d first when using the same seed, instead of at whatever position it naturally occurs w/given seed

misha13:02:53

@holyjak specs are sort of data, like everything in clojure :)

misha13:02:30

Now it just comes down to how much pain, verbosity, and macros you can tolerate

misha15:02:10

is there a builtin predicate for int in Integer/MIN_VALUE, Integer/MAX_VALUE range?

madstap16:02:18

@misha I think that's what int? is

mgrbyte16:02:50

(source int?) shows it's a check against type (Long/Integer/Short/Byte), not bounds.

misha16:02:16

@madstap int? includes longs as well
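For reference, since clojure.core/int? checks the type (Long/Integer/Short/Byte) rather than a numeric range:

```clojure
(int? Long/MAX_VALUE)  ;;=> true  (a Long, so it passes)
(int? (int 42))        ;;=> true  (an Integer)
(int? 1.0)             ;;=> false (floating point)
```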

misha16:02:05

guess I'll use (s/int-in 0 (inc Integer/MAX_VALUE))

alexmiller16:02:02

isn’t that nat-int? ?

alexmiller16:02:14

I guess nat-int? goes to Long/MAX_VALUE

misha16:02:42

(nat-int? Long/MAX_VALUE)        ;;=> true
(nat-int? (inc Long/MAX_VALUE))  ;;=> java.lang.ArithmeticException: integer overflow

alexmiller16:02:35

well the error is occurring on the inc there, not the nat-int?

misha16:02:37

Long/MAX_VALUE
=> 9223372036854775807
(nat-int? 9223372036854775808) ;;=> false

alexmiller16:02:52

if you really want to restrict to java Integer ranges, I think I would make custom predicates and custom specs that use those predicates + appropriate generators

alexmiller16:02:15

but Clojure is not going to give you that as it does not intend to support them
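One way to follow that advice (`int32?` and `::int32` are made-up names, a sketch rather than a blessed pattern): pair a range-checking predicate with s/with-gen so generation stays in bounds.

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

(defn int32?
  "True for fixed-precision ints within the java.lang.Integer range."
  [x]
  (and (int? x) (<= Integer/MIN_VALUE x Integer/MAX_VALUE)))

(s/def ::int32
  (s/with-gen int32?
    #(gen/large-integer* {:min Integer/MIN_VALUE
                          :max Integer/MAX_VALUE})))
```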

misha16:02:55

I am going through Kafka's config documentation and writing a spec for it. Sometimes they use int, [0, ...], and sometimes int, [0, ..., 2147483647] explicitly

firstclassfunc20:02:09

is there anyway to force say a large-integer generator to produce unique values?

gfredericks20:02:53

yeah you have to specify it at the higher level, where you know the scope of uniqueness

gfredericks20:02:26

otherwise you'll have trouble e.g., during shrinking when all your IDs become 0

firstclassfunc20:02:36

yea i just want to produce a set of integers numbered 1-100 without duplicates

gfredericks20:02:36

(gen/shuffle (range 100))? (gen/set (gen/large-integer* {:min 1 :max 100}))?

firstclassfunc20:02:49

thanks @gfredericks not quite the shape I need it yet but appreciate it!

gfredericks20:02:41

sometimes another useful approach is to just remove duplicates

firstclassfunc20:02:46

yea that would help. I am basically trying to create a common index across records, e.g.

(defrecord Subscriber [name id])
(defrecord Subscribed [article id])

(s/def ::name string?)
(s/def ::article uuid?)
(s/def ::id uuid?)

(s/def ::Subscriber
  (s/keys :req-un [::name ::id]))

(s/def ::Subscribed
  (s/keys :req-un [::article ::id]))


(defn generate-subscribers
  "Mock function to generate subscribers"
  []
  (gen/sample (s/gen ::Subscriber)))


(defn generate-subscribed
  "Mock function to generate subscribed"
  []
  (gen/sample (s/gen ::Subscribed)))

firstclassfunc20:02:46

I thought I would just narrow the scope of the generator for ::id to do it, but that has been more challenging.

gfredericks20:02:17

if your IDs are UUIDs you shouldn't have any problems actually

firstclassfunc20:02:32

the problem is that I need the set of ids to be the same across each record to form a relationship

gfredericks21:02:24

ah yeah that kind of thing takes more effort
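A sketch of one approach (record constructors are from the snippet above; the generator shape is an assumption): draw the shared id pool once with gen/bind, then build both record types from it with gen/fmap, so the same ids appear on both sides of the relationship.

```clojure
(require '[clojure.test.check.generators :as gen])

;; sketch: draw a shared pool of ids, then derive both collections from it
(def linked-gen
  (gen/bind (gen/vector gen/uuid 5)                ; shared id pool
    (fn [ids]
      (gen/fmap (fn [[names articles]]
                  {:subscribers (mapv ->Subscriber names ids)
                   :subscribed  (mapv ->Subscribed articles ids)})
                (gen/tuple (gen/vector gen/string-alphanumeric (count ids))
                           (gen/vector gen/uuid (count ids)))))))
```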