This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-10-09
Channels
- # announcements (3)
- # babashka (63)
- # beginners (55)
- # calva (14)
- # cider (12)
- # clj-commons (20)
- # clj-kondo (22)
- # clojure (149)
- # clojure-europe (4)
- # clojurescript (25)
- # community-development (3)
- # conjure (9)
- # datomic (5)
- # emacs (2)
- # fulcro (2)
- # hyperfiddle (6)
- # lsp (23)
- # nbb (4)
- # pedestal (2)
- # reagent (26)
- # releases (3)
- # sql (3)
- # xtdb (6)
Any thoughts on how to manage "releases" of libraries that are git deps only? For example, how to announce a new release, or how to structure Release notes? Release notes are pretty much just the git history in this case. I suppose I could tag particular commits anyway and use that. What are others doing in this case?
Here's what I do https://github.com/seancorfield/build-clj/blob/main/CHANGELOG.md and https://github.com/seancorfield/build-clj/releases and https://clojurians.slack.com/archives/C06MAR553/p1656450734481269 (or announce in #releases)
So I treat it just like a regular library, except its coordinates are different.
Makes sense. Thanks, @U04V70XH6
Only annoying thing is, you have to update the readme afterward for the git sha. So at first I just do :git/sha "..." and then you have to push a new commit with the sha updated.
Yup, same here. https://github.com/seancorfield/build-clj/commit/7081997c3a21f50e96ee98a20f3bd97be9064a6d for example.
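For anyone following along, consuming such a git-only release from deps.edn looks roughly like this. This is only a sketch: the coordinate, tag, and sha below are placeholders, not a real library.

```clojure
;; deps.edn -- io.github.someuser/somelib, the tag, and the sha are all
;; placeholders; with an io.github.* coordinate the :git/url is inferred,
;; and alongside :git/tag a short sha prefix is accepted
{:deps
 {io.github.someuser/somelib
  {:git/tag "v1.0.0" :git/sha "abc1234"}}}
```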
Can I somehow set the default namespace to something other than user when booting up a REPL with clj?
and a thread from earlier about this: https://clojurians.slack.com/archives/C03S1KBA2/p1665251997637329
Thanks @U11BV7MTK! There's actually a good point on https://practical.li/clojure-staging/alternative-tools/clojure-tools/set-namespace-on-repl-startup.html:
> It is not necessary to set the namespace when evaluating code in a Clojure aware editor. Expressions are evaluated within the scope of the namespace in which they are defined.
I pretty much never type into the REPL -- my editor takes care of ensuring the right ns is used for evaluation.
(I mostly have the REPL view in my editor hidden or minimized and use tap> to send interesting values to Portal for visual inspection)
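For reference, the approach described on that practical.li page boils down to a deps.edn alias along these lines (a sketch: the alias name and namespace here are made up):

```clojure
;; deps.edn -- :repl/in-ns is a hypothetical alias name; -e evaluates the
;; ns form first, then -r starts the REPL in whatever *ns* is current
{:aliases
 {:repl/in-ns
  {:main-opts ["-e" "(ns my.app)" "-r"]}}}
```

Invoked as `clj -M:repl/in-ns`, the prompt should then read `my.app=>` instead of `user=>`.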
On Windows, (clojure.java.shell/sh ...) crashes no matter what code I run. Is there a reason for this?
; Execution error (IOException) at java.lang.ProcessImpl/create (ProcessImpl.java:-2).
; CreateProcess error=2, The system cannot find the file specified
Try this: (clojure.java.shell/sh "cmd" "/C" ...). Some commands even work without that; for example, (sh "notepad.exe")
That worked, thank you. I'll have to set up a function that checks the environment to append "cmd" "/C"
@U042LKM3WCW What was the concrete invocation, i.e. the arguments on the dots? It's probably best to diagnose the root cause than to work around it
Anything crashed on Windows without Martin's suggestion: (sh "ls") or (sh "dir") crashed
On Windows those are built-in commands of cmd. Things like notepad work because they're executables in some directory on the PATH
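A sketch of the "function that checks the environment" idea mentioned above (sh* is a made-up name): on Windows, route the command through cmd /C so shell built-ins such as dir work; elsewhere pass the arguments through unchanged.

```clojure
(require '[clojure.java.shell :as shell])

;; hypothetical cross-platform wrapper around clojure.java.shell/sh:
;; prepends "cmd" "/C" only when running on Windows
(defn sh* [& args]
  (let [windows? (.startsWith ^String (System/getProperty "os.name")
                              "Windows")]
    (apply shell/sh (if windows? (list* "cmd" "/C" args) args))))
```

With this, (sh* "dir") runs `cmd /C dir` on Windows and execs a `dir` binary directly elsewhere.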
And on Linux, is ls a built-in of bash or is it just ls?
Some things are both a shell built-in and a binary:
$ bb -e '(str (fs/which "ls"))'
"/bin/ls"
Alright. I was using kit-framework and it was using Linux/Unix-specific commands for its build files. Like sh cp and things like that.
Ah I see. If you use a library like https://github.com/babashka/fs there is no need to shell out for copying
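A small sketch of the copying suggestion, assuming babashka/fs is on the classpath (e.g. added to deps.edn as babashka/fs {:mvn/version "..."}); the file names are throwaway examples:

```clojure
(require '[babashka.fs :as fs])

;; create a file to copy, then copy it; :replace-existing makes the
;; copy idempotent across repeated runs
(spit "example.txt" "hello")
(fs/copy "example.txt" "example-copy.txt" {:replace-existing true})
```

For whole directories there is fs/copy-tree with the same option, so build scripts need no shelling out to cp or xcopy.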
I know babashka normally as a sort of "clojure to shell script" compiler. Is there a way I can use it for platform agnostic copying in my builds?
btw babashka isn't a clojure to shell script compiler either, but that doesn't matter for the sake of this discussion ;)
Ah my bad
Looking through babashka/fs, the API has a copy, but I'll also need to run an npx script. Is there any way to do that or should I just use (windows? ...)? One of the commands I'm running is (sh "cmd" "/C" "npx" "shadow-cljs" "release" "app"). Maybe there's a shadow-cljs Clojure call I can make instead.
@U042LKM3WCW There is another library for this, called babashka.process: https://github.com/babashka/process. You can use (process/shell "npx shadow-cljs release app") and this should work cross-platform, whether the binary is called npx.cmd or npx.exe or npx
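A quick sketch of process/shell, assuming babashka/process is on the classpath (the echo command stands in for the real npx invocation so the example runs anywhere):

```clojure
(require '[babashka.process :as process])

;; process/shell inherits stdio and throws on a non-zero exit code by
;; default, which suits build scripts; {:out :string} captures stdout
;; instead of printing it
(def result (process/shell {:out :string} "echo" "build ok"))

(:out result)  ;; => "build ok\n"
```

In a real build you would call (process/shell "npx shadow-cljs release app") as shown above and let the library resolve npx.cmd/npx.exe/npx per platform.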
That's awesome!
Thank you
Is there a work-around for specifying referential defaults for keyword arguments to a function when there are 9 or more key/value pairs? For example:
(defn panda
  [& {:keys [a b c d e f g h]
      :or {a 1
           b a
           c b
           d c
           e d
           f e
           g f
           h g}}]
  [a b c d e f g h])
panda happily compiles. You can create default values that refer to previously defined symbols -- presumably because the backing for the :or map is an instance of clojure.lang.PersistentArrayMap.
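The threshold is easy to observe at the REPL (a current implementation detail, not something the language guarantees):

```clojure
;; literal maps with up to 8 entries currently read as array maps;
;; 9 or more entries read as hash maps
(type {:a 1 :b 2 :c 3 :d 4 :e 5 :f 6 :g 7 :h 8})
;; => clojure.lang.PersistentArrayMap

(type {:a 1 :b 2 :c 3 :d 4 :e 5 :f 6 :g 7 :h 8 :i 9})
;; => clojure.lang.PersistentHashMap
```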
However, simply add one additional keyword argument:
(defn sad-panda
  [& {:keys [a b c d e f g h i]
      :or {a 1
           b a
           c b
           d c
           e d
           f e
           g f
           h g
           i h}}]
  [a b c d e f g h i])
and the function fails to compile, giving the syntax error: Unable to resolve symbol: h in this context
Hash maps have no order so you're just "lucky" that the smaller example works -- because it is implemented using an array map under the hood and the keys stay in order.
I believe this is failing due to the optimizing behavior of hashmap class selection (one of my few peeves with Clojure's core design)
Once you have a bigger hash map, the order is random, so you cannot have one key's default be another key's value.
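For anyone who wants to peek under the hood: clojure.core/destructure is the public function that let and defn use to generate binding code, and it shows why the small case "accidentally" works -- the :or map's entries are turned into a sequential chain of bindings in whatever order the map yields them:

```clojure
;; returns a vector of generated let bindings; with a small (array-map)
;; :or, a is bound before b, so b's default can see a
(clojure.core/destructure '[{:keys [a b] :or {a 1 b a}} m])
```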
I understand why; I wonder if there is a simple workaround to force instantiation of clojure.lang.PersistentArrayMap
What you're trying to do isn't intended to work and I think it's a bug that it does for small cases.
Oh, self-referential defaults are only a side-effect and not intended behavior?
That is really weird to think of that as a bug
Correct. I believe I've seen Alex confirm this in another thread when the question cropped up here before.
The destructuring code could detect this and throw an exception -- but that would also slow it down for everyone. It's one of those "garbage in, garbage out" cases where the behavior is deliberately undefined (but just happens to produce the result you want sometimes).
I think that is just a justification for the hashmap behavior of clojure.lang.PersistentHashMap
That would impact the performance of a lot of code -- and it would still "sometimes work" which would be even worse.
I think clojure would benefit from greater core control over the instantiation classes. There is a way to have your cake and eat it too here
(since it would then depend on the key names and hashes, not the ordering)
There are many contexts where array-map is the preferred behavior, and you need to perform gymnastics around clojure.lang.PersistentHashMap
> That is really weird to think of that as a bug
Maps are unordered. Even though the keys are ordered in the source code, the Clojure compiler doesn't eval text, it evals data structures. So even before eval, the keys are unordered when the source code is read.
They are only unordered in clojure.lang.PersistentHashMap
I've been doing Clojure in production for about twelve years at this point and, in my experience, this is a non-issue. The intent of the design is that destructured hash map defaults cannot depend on each other.
They are ordered in clojure.lang.PersistentArrayMap
We all know you are very experienced Sean 😄
out of curiosity, what happens if you add a reader tag in front of the map?
eg. use https://github.com/clj-commons/ordered and add #ordered/map
I think it would be great if clj-kondo or Eastwood could detect and flag this as unsafe code.
i think using a let binding is much better than a huge :or, though that is a bit context sensitive (i have 100+ line destructurings where using inline :ors seemed to help clean things up). i think :or with dynamic/variable binding is weird, though it's possible that these symbols come via config. i've never seen :or with self referencing ever. that's really for let bindings
I am writing this on the weekend, and definitely not going to run a linter on it, since that is something I do with code during the week
@U7RJTCH6J IIRC, the read map will be a hash one, so your ordered/map reader can't really figure out the order unless it reads the source file.
that is a reasonable conclusion to come to. I was hoping to avoid the extra let. I think it is more succinct and clean to support this in the :or binding
for the most part i agree with the :or being a good place to do this; if you can get rid of the self referencing stuff then it's probably ok. you could probably do that via a macro
You can do it with a macro indeed, but it will expand into a let, without any map literals.
> I think it is more succinct and clean to support this in the :or binding
Succinct - yes. Clean - not sure what exactly it means.
An argument against it is that supporting ordered self-referential destructuring requires either new syntax or changing the {} reader. In either case, it's pretty much a no-go given how niche and rare such a requirement is.
You can substitute clear/explanatory for clean
There's a big "gotcha" with :or that a lot of people trip up on: if you have {:keys [a b c] :or {a 1} :as opts} and you pass {:b 2 :c 3} but use opts in the function, it won't have :a 1, because the defaults only apply to the named keys, not to the full hash map. Where you want the whole hash map to have those defaults, you have to use let/merge or something similar in the function body.
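A minimal sketch of that gotcha and the merge-based fix (f and g are throwaway names):

```clojure
;; the :or default applies to the local a, but not to the :as map
(defn f [& {:keys [a] :or {a 1} :as opts}]
  [a opts])

(f :b 2)  ;; => [1 {:b 2}] -- a is 1, but opts has no :a

;; to give the whole map the defaults, merge them in the body instead
(defn g [& {:as opts}]
  (merge {:a 1} opts))

(g :b 2)  ;; => {:a 1, :b 2}
```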
Localizing the default logic to the :or form makes sense
I assumed that :keys was evaluated first, and provided symbols in order, as it is a vector
you wouldn't need to change the {} reader. In theory, it seems like the following could be supported (I'm not saying it should be supported).
(defn sad-panda
  [& {:keys [a b c d e f g h i]
      :or
      #ordered/map
      [[a 1]
       [b a]
       [c b]
       [d c]
       [e d]
       [f e]
       [g f]
       [h g]
       [i h]]}]
  [a b c d e f g h i])
But it doesn't currently work (the reader tag does return an ordered map, but the destructuring order doesn't match). The backing for :or wouldn't need to matter.
@U7RJTCH6J i feel that a let binding is probably a lot better at conveying the idea than that solution, but that is pretty cool that it can be done.
Yea, I'm not saying it's a good idea. I've stopped using :or and prefer to use let in all cases.
"I assumed that :keys was evaluated first" -- since the whole thing is a hash map, there's no order for :keys, :or, :as either... [edited to remove the hand-waving about semantics and ordering because it's not really relevant]
> Localizing the default logic to the :or form makes sense
It will complicate the logic of the map reader and incur some extra cognitive load.
Right now, it's as simple as it can possibly be - a map reader produces a map. That's it, it never promises any order or any specific map type. And the fact that under 9 elements it happens to be an array map is nothing but an implementation detail.
No other literal has any context-dependent meaning. Lists are lists, vectors are vectors, and so on.
If maps are treated in a special way, apart from just complicating things, it will also establish a dangerous precedent.
Yeah, I'm with @U7RJTCH6J -- I hardly ever use :or except in very limited cases where I know I'm only going to be dealing with the specified keys, not the whole map, and the defaults are all simple literal values.
I don't use :or often in production... in my hobby code I have a lot of parameters typically. Since this is the first time I have crossed the threshold, I have only now realized that symbol references in :or forms aren't actually supported.
I clearly missed the memo that it was undefined behavior. I can clearly see how it could be supported with the current declaration structure of defn. But I am glad I see the truth now
Don't try this at home! Anyway, this seems to work:
(defn sad-panda
  [& #ordered/map
   {:keys [a b c d e f g h i]
    :or
    #ordered/map
    [[a 1]
     [b a]
     [c b]
     [d c]
     [e d]
     [f e]
     [g f]
     [h g]
     [i h]]}]
  [a b c d e f g h i])
I tested thousands of permutations of the keys
(def binding '[a b c d e f g h i])

(defn ordered-binding [keys]
  (ordered-map
   (cons [(first keys) 1]
         (map (fn [prev next] [next prev])
              keys
              (rest keys)))))

(defn binding->fn-code [keys]
  (let [or-binding (ordered-binding keys)
        m (ordered-map
           :keys keys
           :or or-binding)
        fn-code `(fn [~m]
                   ~keys)]
    fn-code))

(doseq [keys (repeatedly 5000 #(shuffle binding))]
  (let [f (eval (binding->fn-code (vec keys)))]
    (assert (= [1 1 1 1 1 1 1 1 1]
               (f {(-> keys first keyword) 1})))))
I would try that at home, but not at work 😄
The values of :or default maps should not be expressions that depend on the locals you are binding; anything else is undefined and its behavior may change in the future
Any appearance that this works now is implementation details leaking, not intentional behavior
Thanks for confirming Alex!
I deliberately did not @ you on a Sunday -- but thanks for chiming in, Alex! 🙂
(defn panda
  [& {:keys [a b]
      :or {a 1
           b (inc a)}}]
  [a b])

(defn sad-panda
  [& {:keys [a b]
      :or {a 1}}]
  (let [b (or b (inc a))]
    [a b]))
will stay with sad-panda
There are more details in this clj-kondo issue: https://github.com/clj-kondo/clj-kondo/issues/916
@U0E2268BY I'm curious about this comment: https://clojurians.slack.com/archives/C03S1KBA2/p1665337162844769?thread_ts=1665334951.512419&cid=C03S1KBA2 -- do you write hobby code in a very different style to production code? I know this is a bit of a tangent from the original thread so if you want to drill down into this in a new thread or a different channel (or even via DM), that would be cool.
Symbol reference is fine but you should not consider the symbols being bound in the destructuring to be in scope yet. And I wouldn’t consider it wrong for Clojure to change the map type of that :or map to a built-in map type in the service of optimization.
So that's fine because... :keys is used first by destructuring to create a let binding in the same order as the symbols in the vector? What about destructurings that have multiple :keys or mix :strs or :syms? Presumably there's a much less clear ordering there -- although I would still expect each of those vectors to all be processed "first" and then any :or clauses?
sorry, I deleted my messages, but the gist of it was: :or can refer to :keys bindings, but not to bindings re-defined or new bindings in :or itself
due to how it's currently implemented - there are no ordering issues, but if core says this isn't even supported, I'm fine with that
{:keys [a b c] :foo/keys [d e f] :or {d (inc a)}} -- is that defined behavior? (I would expect not)
user=> ((fn [{:keys [a b c] :foo/keys [d e f] :or {d (inc a)}}] [a b c d e f]) {:a 42 :foo/f 13})
[42 nil nil 43 nil 13]
user=> ((fn [{:keys [a b c] :foo/keys [d e f] :or {b (inc f)}}] [a b c d e f]) {:a 42 :foo/f 13})
Syntax error compiling at (REPL:1:57).
Unable to resolve symbol: f in this context
user=>
(edited to show :foo/f in both examples) (I'm not surprised by this but it makes me think the {d (inc a)} is also only working "by accident")
Unpacking Destructuring seems like it would make for a great blog post. You could even cover stuff like:
> (let [[& [& [& [& [& [& hi]]]]]] [42]]
    hi)
(42)
> (let [[& {:as m}] [:a 42]]
    m)
{:a 42}
(and then there's the difference introduced in Clojure 1.11 🙂 )
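The 1.11 difference alluded to here is that kwargs functions also accept a trailing map. A quick sketch (kw-fn is a made-up name):

```clojure
;; a kwargs function destructured with & {:keys ...}
(defn kw-fn [& {:keys [a b]}] [a b])

(kw-fn :a 1 :b 2)    ;; => [1 2] in any Clojure version
(kw-fn {:a 1 :b 2})  ;; => [1 2] on Clojure 1.11+ (a single trailing map)
(kw-fn :a 1 {:b 2})  ;; => [1 2] on 1.11+ (kv pairs plus a trailing map)
```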
@U04V70XH6 if you have enough kv pairs in that map binding example earlier, the order of evaluation for those key sets may change. In short, the expression side of an :or map should not rely on locals being bound in the same destructuring map and you should not make assumptions about when those or expressions are evaluated (or even whether they are evaluated at all)
@U064X3EF3 Thanks. That's exactly what I expected as far as undefined behavior is concerned and why I wanted to show the "two :keys" example, even with "small" hash maps. I think it's a good test case for clj-kondo too so I'll add it to that issue.
please make it a warning
or suggest in the issue
@U04V70XH6 I don't like many of the opinions found in linters, particularly at the warning level, and on the weekends I really don't want to have to fight them in my code
linters are typically curated by an individual, and sometimes the configuration philosophy of that curator leans on the side of more work for the linter user. Since we are speaking of how I'd like to spend my spare time, I lean toward the protective
Interesting. I'm always curious about people's workflows and how/why they might differ by context. I use my laptop for OSS / hobby stuff but my desktop for work so they're "physically separate" but I have my VS Code environment sync'd across the two machines so there's no "shifting of gears" between how I work and how I "play" -- with the exception of GitHub Copilot only being active on the laptop (since I'm free to use it as an OSS maintainer, but it would be a paid service if I used it for work). I used to have different editor environments set up on home and work but it was very jarring...
I do use the same development config, but obviously auto-linting is not a part of my workflow. Perhaps if I invested the time to get a permissive enough config, and perhaps whitelisting (require '[clojure.test :refer :all]) and eliminating unused symbol warnings, I could add it to my editor.
I do like to use linters as an advisor rather than a girdle.
Cool. I love clj-kondo "nagging" me, even when I'm just playing around, as it reinforces the practice of writing "better" code, but I get that some people just don't like that much "over the shoulder critique" when they're typing 🙂
(and I have clj-kondo dialed up from its defaults to be even more strict about several things)
The latter concern, unused symbol warnings, I feel is an important one, particularly regarding kwargs. I tend to declare unused :as symbols for documentation purposes, and I feel that clj-kondo is too binary about that by default. I eliminate unused symbols purposefully in other contexts, but again I prefer advisors to girdles. Erroring on linter warnings in CircleCI is very irksome.
The :as can be configured (as can several other binding things, like defmethod bindings)
Ya, I'll look that one up and stop complaining 😄
When I started clj-kondo some of those things might have been too opinionated, but nowadays all new opinionated stuff is :off by default (or should be)
linter specific docs are here: https://github.com/clj-kondo/clj-kondo/blob/master/doc/linters.md
I find (opinion, I know) :refer :all to be confusing, even in the README.md of libraries that introduce the lib with examples that start with:
(require '[mylib :refer :all])
(foobar :dude :update-fn baz)
I'm like... eh, where do things come from... can't you please just use an alias or :refer [...]? But yeah, that might just be me ;)
wow, it even says "which can be useful for documentation"
nice! I should actually read more of your documentation borkdude! Been focussed on babashka and its libraries (big fan of babashka.fs)
alex has already weighed in, but I want to reiterate that 1) :or maps do not introduce symbol bindings, and 2) mappings have no order, so :or maps have no order
Why does the following not pass stest/check?
(defn plus1 [x] (+ 1 x))
(s/fdef plus1
  :args (s/cat :x number?)
  :ret number?
  :fn #(= (+ 1 (-> % :args :x)) (:ret %)))
(stest/abbrev-result (first (stest/check `plus1)))
;; => {:spec (fspec :args (cat :x number?) :ret number? :fn (= (+ 1 (-> % :args :x)) (:ret %))),
;; :sym bank-processor.core-test/plus1,
;; :failure
;; {:clojure.spec.alpha/problems
;; [{:path [:fn],
;; :pred (clojure.core/fn [%] (clojure.core/= (clojure.core/+ 1 (clojure.core/-> % :args :x)) (:ret %))),
;; :val {:args {:x ##NaN}, :ret ##NaN},
;; :via [],
;; :in []}],
;; :clojure.spec.alpha/spec
;; #object[clojure.spec.alpha$spec_impl$reify__2060 0x241aa6a5 "clojure.spec.alpha$spec_impl$reify__2060@241aa6a5"],
;; :clojure.spec.alpha/value {:args {:x ##NaN}, :ret ##NaN},
;; :clojure.spec.test.alpha/args (##NaN),
;; :clojure.spec.test.alpha/val {:args {:x ##NaN}, :ret ##NaN},
;; :clojure.spec.alpha/failure :check-failed}}
Maybe change number? to int?, assuming you only want plus1 to work on integers?
p-himik: any idea why {:args {:x ##NaN} ...}? The spec tells it that it should be a number
Ah, … let me try.
user=> (for [n [1 1.0 ##NaN ##Inf]] (int? n))
(true false false false)
user=> (for [n [1 1.0 ##NaN ##Inf]] (double? n))
(false true true true)
user=> (for [n [1 1.0 ##NaN ##Inf]] (number? n))
(true true true true)
user=>
Thanks! That was the reason. The following works:
(defn plus1 [x] (+ 1 x))
(s/fdef plus1
  :args (s/cat :x (s/and number? #(not (Double/isNaN %))))
  :ret number?
  :fn #(= (+ 1 (-> % :args :x)) (:ret %)))
It is a bit clunky, though.
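A possibly less clunky alternative, assuming double inputs are acceptable for generation: spec ships with s/double-in, which can exclude NaN and infinities directly (::finite-double is a made-up spec name):

```clojure
(require '[clojure.spec.alpha :as s])

;; doubles only, rejecting both ##NaN and ##Inf/##-Inf
(s/def ::finite-double (s/double-in :infinite? false :NaN? false))

(s/valid? ::finite-double 1.5)    ;; => true
(s/valid? ::finite-double ##NaN)  ;; => false
```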
Ah, I guess (plus1 ##Inf) is still ##Inf so that passes too...
user=> (= ##Inf ##Inf)
true
So that’s no problem, as the check will pass as well. It’s just that (= ##NaN ##NaN) is false. Which is weird.
It's similar to NULL in SQL. NULL is not equal to NULL there. NULL is unknown information, and something unknown cannot be equal to something unknown. NaN is "not a number" - and something that's not a number cannot be equal to something else that's not a number. Perhaps there's a more appropriate technical or philosophical explanation, but that one for some reason got stuck in my mind.
Your test is not very useful though, it's just testing the same code. You'd need to use a different implementation to make it a better test.
Or you'd need to test properties of +1, like some general patterns you know it should follow, and assert those.
🙂 It’s a test for inc. Not very useful to rewrite in the first place. I stripped all the real logic out.
Thanks for all the help. It all makes sense now.
… and if you are confused by the fact that (= (inc ##Inf) ##Inf) then don’t read about Hilbert’s Hotel.
Oh, I wasn't confused about it -- I'm a mathematician by "training" (back at university, at least) -- but when I saw the ##NaN thing, my brain went "Oh, there's another weird number in Clojure, ##Inf", and so I started to type and that was when my mathematician kicked in and corrected what I was typing somewhat :)
But I did have to convince my brain that (= ##Inf ##Inf) ;=> true whereas I knew that (= ##NaN ##NaN) ;=> false 🙂
Okay, I didn't totally get what you're testing. But I just mean that (= (+ 1 x) (+ 1 x)) is not a very good way to test +, because if it has a bug the bug will exist on both sides.
@U0K064KQV 🙂 agreed, not a very meaningful test at all. That’s why I knew the problem had to be somewhere else.
if I remember correctly, NaN not being equal to NaN is explicitly specified in IEEE 754. If so, Clojure / Java are just following the official spec for floating point numbers.