This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-02-14
Channels
- # beginners (19)
- # boot (11)
- # cider (59)
- # cljs-dev (292)
- # cljsrn (2)
- # clojure (121)
- # clojure-brasil (19)
- # clojure-canada (2)
- # clojure-france (2)
- # clojure-italy (57)
- # clojure-spec (54)
- # clojure-uk (20)
- # clojurescript (83)
- # core-async (20)
- # cursive (5)
- # datascript (2)
- # datomic (10)
- # duct (25)
- # editors (4)
- # emacs (2)
- # fulcro (5)
- # funcool (1)
- # graphql (2)
- # immutant (8)
- # java (1)
- # jobs (4)
- # jvm (1)
- # keechma (5)
- # luminus (10)
- # off-topic (113)
- # om (36)
- # onyx (11)
- # parinfer (55)
- # pedestal (7)
- # protorepl (28)
- # re-frame (25)
- # reagent (6)
- # ring-swagger (1)
- # shadow-cljs (113)
- # spacemacs (1)
- # specter (23)
- # unrepl (8)
- # yada (8)
@misha ha I don't quite think this is my use case. I might very well be misunderstanding here, but you're not using :m/merged at all
And I guess what surprises me here is that the order in which the specs are passed to s/merge is important.
I guess you could argue that since the order in which maps are passed to clojure.core/merge is important, it follows that the order of the specs passed to s/merge is also important, but I have a hard time accepting that 🙂
@slipset but you do accept clojure.core/merge, don't you? :)
(merge {:a 1} {:a 2}) ;;=> {:a 2}
(merge {:a 2} {:a 1}) ;;=> {:a 1}
if you s/merge s/keys specs with :req/:opt, order should not be an issue, but it might be if you merge :req-un/:opt-un
Which I find surprising, since this is not mentioned in the doc-string, and I don't see why there should be a difference.
(s/def :foo/a (s/or :i int?))
(s/def :bar/a int?)
(s/def :foo/m (s/keys :req-un [:foo/a]))
(s/def :bar/m (s/keys :req-un [:bar/a]))
(s/conform (s/merge :foo/m :bar/m) {:a 1}) ;;=> {:a 1}
(s/conform (s/merge :bar/m :foo/m) {:a 1}) ;;=> {:a [:i 1]}
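A sketch of why the :req variant sidesteps this (the :bar/b, :foo/m2, and :bar/m2 names are invented for illustration): with qualified keys each key has exactly one registered spec, so there is no collision for s/merge's order to resolve, and both merge orders conform identically.

```clojure
(require '[clojure.spec.alpha :as s])

(s/def :foo/a (s/or :i int?))
(s/def :bar/b int?)
(s/def :foo/m2 (s/keys :req [:foo/a]))
(s/def :bar/m2 (s/keys :req [:bar/b]))

;; each qualified key keeps its own spec, so order no longer matters
(s/conform (s/merge :foo/m2 :bar/m2) {:foo/a 1 :bar/b 2})
;;=> {:foo/a [:i 1], :bar/b 2}
(s/conform (s/merge :bar/m2 :foo/m2) {:foo/a 1 :bar/b 2})
;;=> {:foo/a [:i 1], :bar/b 2}
```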
If only specs were data and I could use Specter to transform them...
Crazy idea: perhaps I could have only a spec for the final order and, when validating a non-final step of the checkout, just filter out exceptions for paths I know this step doesn't require yet.
So, if I run a check with 100 tests and it fails, re-running with that seed will only fail if I again run 100 tests? The seed doesn't seem to apply to just the single test that fails, since if I change num-tests to 1 it will usually pass. This decreases the usefulness of the seed for functions that take large data structures and take some time to run.
seeding a RNG just makes it generate a certain sequence of values, it doesn’t mean it’s going to magically find the exact value in the sequence that triggered some behavior and then always generate that number first. I assume the same logic applies to test.check generators?
anyway, when the check fails it spits out a minimal failing case that you could use in a “manual” test
assuming you don't use an external source of randomness in your generators, re-using the seed from a test should reproduce the identical generator values and the identical failure
true, I think the disconnect is expecting the failing case to be re-gen’d first when using the same seed, instead of at whatever position it naturally occurs w/given seed
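To make the seed mechanics concrete, a minimal sketch (the property here is invented and deliberately likely to fail): re-running quick-check with the :seed from a failing result replays the identical sequence of generated values, so the failure reappears at its original position in the run, not first.

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; a property that large-integer inputs are likely to break
(def prop-small
  (prop/for-all [n gen/large-integer]
    (< n 1000)))

;; the result map carries the :seed that produced the run
(def result (tc/quick-check 100 prop-small))

;; same seed + same num-tests => the same generated sequence,
;; hence the same failure and the same shrunk minimal case
(tc/quick-check 100 prop-small :seed (:seed result))
```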
isn’t that nat-int?
?
I guess nat-int? goes to Long/MAX_VALUE
(nat-int? Long/MAX_VALUE) ;;=> true
(nat-int? (inc Long/MAX_VALUE)) ;;=> java.lang.ArithmeticException: integer overflow
well the error is occurring on the inc there, not the nat-int?
if you really want to restrict to java Integer ranges, I think I would make custom predicates and custom specs that use those predicates + appropriate generators
but Clojure is not going to give you that as it does not intend to support them
I am going through kafka's config documentation and writing spec for it.
Sometimes they use int, [0,...], and sometimes int, [0,...,2147483647] explicitly
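A sketch of the custom-predicate advice above (::num-partitions and ::retries are made-up config keys, not actual Kafka spec names): spec's built-in s/int-in covers the documented [0, 2147483647] range with a matching generator, and s/with-gen handles the hand-rolled-predicate route.

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; s/int-in is start-inclusive, end-exclusive, and comes with a generator
(s/def ::num-partitions (s/int-in 0 (inc Integer/MAX_VALUE)))

(s/valid? ::num-partitions Integer/MAX_VALUE)       ;;=> true
(s/valid? ::num-partitions (inc Integer/MAX_VALUE)) ;;=> false

;; equivalent custom predicate + generator, per the advice above
(defn int32-nonneg? [x]
  (and (int? x) (<= 0 x Integer/MAX_VALUE)))

(s/def ::retries
  (s/with-gen int32-nonneg?
    #(gen/large-integer* {:min 0 :max Integer/MAX_VALUE})))
```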
is there any way to force, say, a large-integer generator to produce unique values?
@firstclassfunc this might apply
yeah you have to specify it at the higher level, where you know the scope of uniqueness
otherwise you'll have trouble e.g., during shrinking when all your IDs become 0
yea i just want to produce a set of integers numbered 1-100 without duplicates
(gen/shuffle (range 1 101))
? (gen/set (gen/large-integer* {:min 1 :max 100}))
?
thanks @gfredericks not quite the shape I need yet but appreciate it!
sometimes another useful approach is to just remove duplicates
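A sketch of that remove-duplicates approach under the 1-100 constraint mentioned above (the distinct-ints name is invented): fmap distinct over a vector generator, accepting that the result may come out shorter than requested.

```clojure
(require '[clojure.test.check.generators :as gen])

;; generate 20 ints in [1,100], then drop duplicates; the resulting
;; vector may be shorter than 20
(def distinct-ints
  (gen/fmap (comp vec distinct)
            (gen/vector (gen/large-integer* {:min 1 :max 100}) 20)))

(gen/generate distinct-ints)
```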
yea that would help. I am basically trying to create a common index across records, e.g.
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

(defrecord Subscriber [name id])
(defrecord Subscribed [article id])
(s/def ::name string?)
(s/def ::article uuid?)
(s/def ::id uuid?)
(s/def ::Subscriber
(s/keys :req-un [::name ::id]))
(s/def ::Subscribed
(s/keys :req-un [::article ::id]))
(defn generate-subscribers
"Mock function to generate subscribers"
[]
(gen/sample (s/gen ::Subscriber)))
(defn generate-subscribed
"Mock function to generate subscribed"
[]
(gen/sample (s/gen ::Subscribed)))
I thought I would just narrow the scope of the generator for ::id to do it, but that has been more challenging.
if your IDs are UUIDs you shouldn't have any problems actually
the problem is that I need the set of ids to be the same across each record to form a relationship
ah yeah that kind of thing takes more effort
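One way to sketch that extra effort (all names here are illustrative, and this builds plain maps rather than the records above): generate the shared pool of distinct ids once, then derive both collections from the same pool with gen/let, so every id in one collection has a counterpart in the other.

```clojure
(require '[clojure.test.check.generators :as gen])

;; one shared, duplicate-free pool of ids
(def ids-gen (gen/vector-distinct gen/uuid {:num-elements 10}))

(def linked-gen
  (gen/let [ids      ids-gen
            names    (gen/vector gen/string-alphanumeric 10)
            articles (gen/vector gen/uuid 10)]
    {:subscribers (mapv (fn [n id] {:name n :id id}) names ids)
     :subscribed  (mapv (fn [a id] {:article a :id id}) articles ids)}))

;; every :id in :subscribed matches an :id in :subscribers
(gen/generate linked-gen)
```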