This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-08-02
Channels
- # announcements (11)
- # aws (3)
- # babashka (34)
- # beginners (20)
- # biff (2)
- # calva (3)
- # cherry (29)
- # cider (6)
- # cljs-dev (9)
- # clojure (124)
- # clojure-europe (12)
- # clojure-norway (5)
- # clojure-uk (2)
- # clojurescript (32)
- # conjure (11)
- # datalevin (1)
- # datomic (16)
- # deps-new (1)
- # etaoin (6)
- # holy-lambda (10)
- # honeysql (28)
- # hyperfiddle (21)
- # jackdaw (2)
- # jobs (2)
- # leiningen (15)
- # missionary (12)
- # off-topic (132)
- # other-languages (1)
- # pathom (13)
- # rdf (10)
- # re-frame (8)
- # reagent (5)
- # releases (1)
- # remote-jobs (4)
- # shadow-cljs (32)
- # tools-deps (6)
- # vim (15)
- # xtdb (24)
Why does this complete?
(->> (range)
(map (apply comp (take 8 (cycle [inc inc dec]))))
(filter odd?)
(map (apply comp (take 7 (cycle [inc inc dec]))))
(filter even?)
(take 10))
;=> (8 10 12 14 16 18 20 22 24 26)
But this not?
(->> (range)
(map (apply comp (take 8 (cycle [inc inc dec]))))
(filter odd?)
(map (apply comp (take 8 (cycle [inc inc dec]))))
(filter even?)
(take 10))
; keeps going
Okay, this is a smaller minimal repro:
(->> (range)
(filter odd?)
(map (apply comp (take 8 (cycle [inc]))))
(filter even?)
(take 10))
Same for:
(->> (range)
(filter odd?)
(map (apply comp (repeat 8 inc)))
(filter even?)
(take 10))
vs
(->> (range)
(filter odd?)
(map (apply comp (repeat 7 inc)))
(filter even?)
(take 10))
It's not like comp
can't deal with the larger arity:
(->> (range)
(map (apply comp (repeat 200 inc)))
(take 10))
;=> (200 201 202 203 204 205 206 207 208 209)
(->> (range)
(map (apply comp (repeat 200 inc)))
(map (apply comp (repeat 200 inc)))
(map (apply comp (repeat 200 inc)))
(take 10))
;=> (600 601 602 603 604 605 606 607 608 609)
Repeat 7 inc versus repeat 8 inc on odd numbers will leave you with exclusively odd or exclusively even numbers.
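A quick REPL check of the parity argument (hypothetical session, not from the thread):

```clojure
;; composing 8 incs adds 8 (even), so parity is preserved:
((apply comp (repeat 8 inc)) 1)  ;=> 9, still odd

;; composing 7 incs adds 7 (odd), so parity flips:
((apply comp (repeat 7 inc)) 1)  ;=> 8, now even

;; hence mapping 8 incs over odd numbers yields only odd numbers,
;; and a subsequent (filter even? ...) never finds a single element
```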
Checks out:
(->> (range)
(filter even?)
(filter odd?)
(take 10))
; keeps going
Right, because filtering odd?
can't know it doesn't have any left, so it'll keep looking for 10 odds for us. Makes sense
Can someone help me with the following clojure-spec issue:
(s/valid? spec-definition data)
=> false
(s/explain spec-definition data)
Success!
The validation reports as invalid, but when I want to see what is wrong, I get "Success!"? I really don't understand this...
Can you give us a specific example that produces this result?
(def schema-attributes [{:name keyword?
:type string?
:cardinality (s/spec #{:one :many})
(ds/opt :unique) (s/spec #{:identity :value})
(ds/opt :index) boolean?
(ds/opt :doc) string?}])
(def schema-entities [{:name keyword?
(ds/opt :implements) [keyword?]
(ds/opt :doc) string?
(ds/opt :attributes) schema-attributes}])
(def schema-interfaces [{:name keyword?
(ds/opt :doc) string?
(ds/opt :attributes) schema-attributes}])
(def schema-enums [{:name string?
(ds/opt :doc) string?
:values [keyword?]}])
(def schema {:entities schema-entities
:interfaces schema-interfaces
:enums schema-enums})
Line 235 of data- I think you're calling map here: change the opening ( into [ and the ending ) into ].
@U01RL1YV4P7 that solved the issue, thanks! But I still don't understand why explain
did not tell me what was wrong?
explain prints to standard out; what it returns is success/failure about that. Try explain-data instead.
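For example, with a trivial spec (a sketch; ::n is an illustrative name):

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::n int?)

;; s/explain prints its report to *out* and returns nil either way
(s/explain ::n "oops")
;; prints something like: "oops" - failed: int? spec: :user/n

;; s/explain-data returns the problems as a data structure
;; (and nil when the value is valid)
(s/explain-data ::n "oops")
;; => a map with a :clojure.spec.alpha/problems entry
```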
I have not looked at the spec yet, but generally I consider such a case of invalid but no explain data to be a bug
Quick poll: what would you use on a new project (votes start with 1 for easier access): • 🔴 clojure.spec • 🟡 malli • 🔵 plumatic schema • 🟢 other (explain)
I would like to hear reasons to use malli over spec, from people who voted on malli. I would use spec because it is the most native solution in Clojure. The simplest one without adding additional libraries. It works in clj and cljs and keep things consistent on FE and BE. The translation to human messages is easier, than people say. I don’t feel anything is missing here. Why use malli instead?
@U0WL6FA77 One reason is that malli is more readable. I mean, clojure.spec is not a “Data DSL”
It is easy to introspect and transform a malli schema, e.g. removing read-only fields from a map:
(def person
  (m/schema [:map [:id :string]
             [:name :string]]))
(def w-person (mu/dissoc person :id))
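Exercising the schemas above (a sketch; assumes metosin/malli on the classpath):

```clojure
(require '[malli.core :as m]
         '[malli.util :as mu])

(def person
  (m/schema [:map [:id :string] [:name :string]]))

;; derive a "write" schema by dropping the read-only :id field
(def w-person (mu/dissoc person :id))

(m/validate person {:id "1" :name "Ada"})  ;=> true
(m/validate w-person {:name "Ada"})        ;=> true, :id no longer required
(m/validate w-person {:id "1"})            ;=> false, :name is missing
```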
This latter reason of being able to modify it at runtime seems to me to be both the biggest reason to use malli, and in its application something that would be almost entirely supplanted by spec2's selection. That said, I'm coming at this from the perspective of someone who would use spec over malli because of the built-in nature of it.
The whole data dsl over it I wouldn't consider to be a large problem because it would be almost trivial to write over spec, but if that's such a large reason people prefer it, then maybe I should just write a library that provides malli spec syntax over clojure.spec.
not to be down on malli, it's a very cool library, I'm just surprised that appearance is such a big factor.
> being able to modify it at runtime
It doesn’t sound right to me, but maybe there are use cases which I didn’t think about.
Another point in favor (?) of malli is that it also does bi-directional coercion (e.g. decode and encode) and the ability to compose and provide your own transformers. And the performance is great, if you care about that.
> Can you give a specific use case when it makes a real difference vs spec?
[:map
[:a [:map [:b string?] [:c int?]]]
[:d [:vector [:map [:e keyword?] [:f [:vector int?]]]]]]
or
(s/def ::a (s/keys :req-un [::b ::c]))
(s/def ::b string?)
(s/def ::c int?)
(s/def ::d (s/coll-of (s/keys :req-un [::e ::f])))
(s/def ::e keyword?)
(s/def ::f (s/coll-of int?))
(s/def ::data (s/keys :req-un [::a ::d]))
If you are comfortable reading spec in this case, okay 🙂
Honestly this just feels like a desire for a dsl macro. It could be made pretty easily by just constructing namespaces for keys programmatically to prevent conflicts between different nested structures. All that said, I think part of the goal with spec is to lean more into flat structures than nested ones.
This works better when you're not dealing with horrendous gnarly APIs you don't control
https://clojurians.slack.com/archives/C03S1KBA2/p1659460194699519?thread_ts=1659434539.729849&cid=C03S1KBA2
So first of all, the spec definition in your example allows you to validate each of the values right away, like for example ::c, while your malli example can’t do it, because it is one definition of the whole data structure. This lets you use each spec right away with functions’ input / output.
That is why this comparison is unfair, because the two pieces of code have slightly different effects.
And, something also really important for me: I can generate random data structures based on a spec right away, for test purposes. Random tests are really helpful to discover corner cases.
So in the end these 2 examples at first look are “equal”, but achieve 2 different things, which are not equal.
spec gives a more extended solution here, because you can use it to generate random data like (gen/generate (s/gen ::spec-labels/labels-orders)), to validate input / output of functions, and you can use each field, map, or vector separately and as a whole structure.
In malli you can put all these in a registry, like
{::a int? ::b float? ::c [:map ::a ::b]}
You can set a mutable registry as the default registry and work like spec.
Malli can generate random data too, but I originally wrote about code readability. Can you write my example with spec better? I use spec, but very rarely.
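A sketch of the registry idea (assumes metosin/malli on the classpath; ::a, ::b, ::c are illustrative names):

```clojure
(require '[malli.core :as m])

;; a registry of named schemas, merged over malli's defaults
(def registry
  (merge (m/default-schemas)
         {::a int?
          ::b float?
          ::c [:map [:a ::a] [:b ::b]]}))

;; the named schemas can be used individually or composed,
;; much like individually registered specs
(m/validate ::a 1 {:registry registry})             ;=> true
(m/validate ::c {:a 1 :b 2.0} {:registry registry}) ;=> true
```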
> Malli can generate random data too
But then it will not be as simple an example as the ones we compared before.
Well, after all I think your s/def are ok, but because of ::a and ::b it doesn’t look good.
Sure it will, if you split your definitions up.
I often define map value schemas with def, then refer to them in the map definition
this is a little more complex to explain; normally I don’t do something like (s/def ::a int?), but there is (s/def ::id int?) and I use this ::id in other, more complex structures.
> Sure it will, if you split your definitions up.
> I often define map value schemas with def, then refer to them in the map definition
Yes, but it’s still more readable. (my humble opinion) 🙂
You are right, my example is not an example of best practice. You can come up with any of your own. But I don’t think it gets any better.
Not an educated opinion but maybe it's useful to know the uneducated opinion: Malli seems more repl/runtime friendly and less opinionated re keywords in maps, separately registering everything, plus I've heard of metosin and used other stuff. I've used Malli for two small projects and spec never beyond trying out
I think spec is much more inspired by RDF/OWL and semantic graphs, but malli is just practically too useful and isn’t in alpha
For schema-as-data in clojure.spec, one can use data-spec: https://cljdoc.org/d/metosin/spec-tools/0.10.5/doc/data-specs
@U013100GJ14 re: data-specs, malli has https://github.com/metosin/malli#lite, which is a zillion times better than data-spec: no hacks, no bugs, not leaning on spec internals, does not leak memory, orders of magnitude faster.
really happy to see malli get so many upvotes 🙇. We (at Metosin) have all those libs running in prod, but all new projects use Malli.
quick search on github will find interesting results https://github.com/helins/wasm.cljc/blob/8eef04fb70733be3d5ca5687e6597e0b9e0af0d0/src/main/helins/wasm/schema.cljc
I know malli and spec look similar, but I personally think they’re really tools for two different jobs. I’d use malli for validation and coercion at the edges of my application, e.g. on HTTP-sent JSON bodies, and I’d use spec for defining the contracts internally in my code
While malli feels like it lacks ergonomics for internal code contracts, I wonder why not use the same tool across the code base? You'll have some impedance mismatch down the line
payload is usually unqualified keys
spec is not good at this area
also, spec may be vulnerable to ReDoS-like attacks: depending on how you create your non-simple specs and on the input, it may take too long to validate/fail.
to check payload data I would use json-schema-compatible tools, like malli.
but for functions, I would always use spec.
>> spec is not good at this area
> Can you be more specific?
No, but I tried to use it for a while. Many things are hard to do. Sometimes you need to create a dummy/fake namespace just to specify a new keyword. Error messages are way worse.
> https://quanttype.net/posts/2021-03-06-clojure-spec-and-untrusted-input.html Ah I see what you are talking about. I would consider this more like a developer bug, not like a spec issue.
> Many things are hard to do. Sometimes you need to create a dummy/fake namespace just to specify a new keyword. Error messages are way worse.
I can’t really compare, because I didn’t try malli. Thank you for the input.
Creating "fake" namespaces is somewhat of a concern, but a lot of that pain went away with :as-alias in 1.11
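A sketch of what :as-alias buys you (my.app.user is a made-up namespace that need not exist on the classpath):

```clojure
;; Clojure 1.11+: register a namespace alias without loading the
;; namespace (it doesn't even have to exist), purely to get short
;; qualified keywords for specs
(require '[my.app.user :as-alias user])

::user/id
;; => :my.app.user/id
```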
Personally (and this is definitely a personal opinion) I like using namespaces (even ones that contain no code) to represent the relationships between parts of my data.
The untrusted input article is very useful; I was unaware that it would validate keys outside the set for the given map. It seems pretty solvable with just a select-keys, but really important to know about. Thanks for sharing!
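A sketch of the select-keys guard (the spec names and safe-valid? helper are illustrative, not from the thread):

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::name string?)
(s/def ::user (s/keys :req-un [::name]))

;; whitelist the keys we actually care about before validating, so
;; attacker-supplied extras are dropped instead of being conformed
(defn safe-valid? [input]
  (s/valid? ::user (select-keys input [:name])))

(safe-valid? {:name "Ada" :huge-blob (range 1e6)})  ;=> true
(safe-valid? {:huge-blob (range 1e6)})              ;=> false
```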
I agree with you, Joshua. Two libs that may help you qualify the keys and have good error messages: https://github.com/souenzzo/eql-as https://github.com/molequedeideias/eql-inspect
Can someone please explain the rationale behind metadata :arglists taking priority over the actual arglist of the function? The actual signature of a function can be just [& args] while :arglists can be [x? y? z?]. clojure.core has a few examples of that.
the point is to serve as an explicit override to convey something more
Why not just put it in the actual function signature? Is this mostly a tool for maintaining backwards compatibility while increasing the scope of the function's behaviour?
Sometimes (with macros especially) it's more convenient to write something like [& args] in the function signature but really mean something more specific
:arglists is documentation
Okay, that makes sense. One last question on this topic: is prioritizing :arglists a strictly followed convention?
I don't think it's a convention, it's what Clojure does (in the very few places that actually use arglists)
I've seen it used elsewhere, like in https://github.com/henryw374/cljc.java-time/blob/master/src/cljc/java_time/instant.clj.
Often the arglists metadata is pseudo-code, like the use of * to indicate multiple possible arguments
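For illustration, a hypothetical var whose real signature is [& args] but whose :arglists metadata documents the intended shapes:

```clojure
(defn greet
  "Greets name, optionally with a custom greeting."
  {:arglists '([name] [name greeting])}  ; explicit override
  [& args]
  (let [[name greeting] args]
    (str (or greeting "Hello") ", " name)))

;; tooling (doc, editors) reads the override, not [& args]:
(:arglists (meta #'greet))  ;=> ([name] [name greeting])
(greet "Ada" "Hi")          ;=> "Hi, Ada"
```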
My guess: that code was generated. And being generated it has “ugly” args like
[^java.time.Instant this14077 ^java.time.temporal.TemporalQuery java-time-temporal-TemporalQuery14078]
And here the arg lists were updated to give type information:
["java.time.Instant" "java.time.temporal.TemporalQuery"]
instead of this14077 and j-t-t-TQ14078
yes, code gen is a common use case for :arglists
Hmm, okay. I guess there isn't a clear path for interpreting :arglists and it's kind of a case-by-case thing. It's documentation, but not much more can be said about it without looking at its use in its codebase.
I think there are a couple places in the compiler that do look at arglists but they are pretty minimal
What do you mean “isn’t a clear path for interpreting :arglists” . What are you trying to do that you see tension in this?
I'm making a little code exploration tool and I'd like to be able to map function calls to specific arities. If different libraries use :arglists in an inconsistent manner, it seems to complicate the task.
@UPWHQK562 you're using clj-kondo analysis for this right? why do you need arglists again? clj-kondo also provides all the possible arities of a function as data
@U04V15CAJ that's right, I'm using clj-kondo analysis data.
I'd like to have a sort of "function profile" which lists a function and its arities along with the argument names. In the case of https://github.com/clojure/clojure/blob/clojure-1.10.1/src/clj/clojure/core.clj#L289 for example, the :arglists contains the list of arguments you would expect, while the :arglist-strs doesn't have information I would consider useful for most users:
{:ns clojure.core,
:name defn,
:doc
"Same as (def name (fn [params* ] exprs*)) or (def\n name (fn ([params* ] exprs*)+)) with any doc-string or attrs added\n to the var metadata. prepost-map defines a map with optional keys\n :pre and :post that contain collections of pre or post conditions.",
:fixed-arities #{},
:varargs-min-arity 3,
:arglist-strs ["[&form &env name & fdecl]"],
:meta
{:arglists
 '([name doc-string? attr-map? [params*] prepost-map? body]
   [name doc-string? attr-map? ([params*] prepost-map? body)+ attr-map?])}}
Yep, it's there. The thing I've learned from this thread is that I can't expect its usage to be consistent. In clojure.core it's the authoritative list of arguments, but it's not really enforced and downstream libraries may treat it differently.
This is an issue that I've bumped heads with the Eastwood maintainers about over the years -- as I recall, Eastwood relies on :arglists to check calls and gets confused if that metadata is not actually a valid Clojure argument list or doesn't directly match the intended usage. I've always treated :arglists as "documentation" to help editors offer better guidance for how to call functions -- not as actual argument lists for the functions.
it will also read the pom in the jar and use it to pull dependencies
In order to confirm that streaming data with cursors works (clojure.java.jdbc), I need to measure the memory consumed by the query output. If streaming works as expected, it should behave similarly to lazy seqs, e.g. hold only one chunk of data. Is there a simple way to do it? Please advise. Thanks!
you could use clj-memory-meter
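A sketch of its use (assumes com.clojure-goes-fast/clj-memory-meter on the classpath, and -Djdk.attach.allowAttachSelf on JDK 9+):

```clojure
(require '[clj-memory-meter.core :as mm])

;; measure the retained size of a fully realized result, to compare
;; against what a streaming/cursor-based query holds at once
(mm/measure (vec (range 1000)))
;; => a human-readable size string, e.g. "28.9 KiB"
```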
Thanks