This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-11-29
Channels
- # adventofcode (9)
- # announcements (2)
- # aws (78)
- # babashka (55)
- # beginners (97)
- # biff (9)
- # calva (11)
- # cherry (2)
- # cider (8)
- # clerk (7)
- # clj-kondo (6)
- # clj-on-windows (4)
- # clojure (213)
- # clojure-austin (6)
- # clojure-europe (63)
- # clojure-nl (1)
- # clojure-norway (5)
- # clojure-spec (10)
- # clojure-uk (1)
- # clojurescript (14)
- # clr (2)
- # community-development (3)
- # conjure (14)
- # datomic (2)
- # deps-new (5)
- # dev-tooling (10)
- # editors (3)
- # emacs (3)
- # etaoin (19)
- # events (4)
- # fulcro (71)
- # holy-lambda (20)
- # java (3)
- # jobs (2)
- # leiningen (4)
- # lsp (24)
- # malli (15)
- # membrane (107)
- # music (1)
- # off-topic (29)
- # pedestal (4)
- # polylith (1)
- # portal (2)
- # rdf (5)
- # releases (7)
- # scittle (5)
- # shadow-cljs (8)
- # tools-build (15)
- # tools-deps (6)
- # xtdb (13)
why does cons need to accept an ISeqable as its second argument
why can’t it accept a number like in other lisps
fyi: The require in the test namespace is written for a cljc file. If only using ClojureScript, you can pull [:include-macros true] out of the reader conditional (https://clojure.org/guides/reader_conditionals); or for Clojure only, just remove it.
Clojure does not have the notion of "improper lists" like other Lisps do
I would recommend watching the video, or reading the transcript, of "Clojure for Lisp Programmers" where Rich Hickey describes some of the differences he chose between Clojure, Common Lisp, and Scheme when designing Clojure. He won't necessarily explain everything you might want to know in depth, but there are some interesting nuggets in there: https://github.com/matthiasn/talk-transcripts/blob/master/Hickey_Rich/ClojureIntroForLispProgrammers.md
The sequence abstraction is pretty fundamental to Clojure, and improper lists do not fit into that abstraction.
Personally, they seem to me like a weird curiosity of other Lisps that they allow such "lists".
This is not specific to clojure (or even lisp - it's focused on erlang/elixir, but talks of a lot of lisp history) - https://dorgan.netlify.app/posts/2021/03/making-sense-of-elixir-(improper)-lists/ - really neat read
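To make the cons point above concrete, here is a quick REPL sketch (the exact error text may vary slightly between Clojure versions):
(cons 1 [2 3])   ;; => (1 2 3)  — the second argument must be seqable
(cons 1 nil)     ;; => (1)      — nil is treated as the empty seq
(cons 1 2)       ;; throws: Don't know how to create ISeq from: java.lang.Long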
can i extend a class in clojure
Yes. Maybe this chart is useful to you: https://cemerick.com/blog/2011/07/05/flowchart-for-choosing-the-right-clojure-type-definition-form.html
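As a minimal sketch of one of the options in that chart, proxy can extend a concrete Java class and override its methods (the class and behaviour here are just illustrative):
(def t (proxy [java.lang.Thread] []
         (run []
           (println "hello from" (.getName this)))))

(.start t)  ;; runs the overridden run method on a new thread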
if i have to directly iterate over a vector using a loop block, should i use indexes or first/`rest`-esque forms?
user=> (let [v (vec (range 50000))
             i-max (count v)]
         (time
          (dotimes [_ 500]
            (loop [i 0
                   sum 0]
              (if (< i i-max)
                (recur (inc i) (+ sum (nth v i)))
                sum)))))
"Elapsed time: 307.461583 msecs"
nil
user=> (let [v (vec (range 50000))]
         (time
          (dotimes [_ 500]
            (loop [[f & r] v
                   sum 0]
              (if r
                (recur r (+ sum f))
                sum)))))
"Elapsed time: 429.898333 msecs"
nil
(def v (vec (range 1000000)))
(def l (count v))

(time
 (dotimes [_ 1000]
   (loop [i 0]
     (when (< i l)
       (get v i)
       (recur (inc i))))))
;; => 13254ms

(time
 (dotimes [_ 1000]
   (loop [[x & xs] v]
     (when x
       (recur xs)))))
;; => 16720ms
s/when/if
(in the index-based example)
oh, n/m, you don’t use the return in any case. toy examples… 😄
I’d argue though that speed is not everything, and first/rest are probably easier to follow. And if possible, prefer reduce & friends over loop
yeah 🙂 just trying to evaluate the performance of access via get and first/rest
i created my own benchmark
i should use indexes
there is a reduce function in clojure core
Also, replace subvec with next in your example, and the difference will be marginal
@U04V4KLKC i am aware of that
did you find it not good in your case?
no i am just more familiar with loops
ok, but its signature is very close to your reduce1. you can just use clojure.core/reduce instead of a custom one
anyway I would recommend learning reduce because from my experience it is in the top 10 functions I use the most
@U922FGW59 if so then the second version is faster but not by much
ofc the builtin reduce is faster than any one of them lol
I think “faster” is a weird criterion here.
• The built-in reduce is even faster, because it uses an internal optimisation
• If you really really need max speed (hint: you don’t), go with arrays
• It’s much more important to write simple code than fastest
setting speed aside, the non-indexed version is better because it's more explicit?
It’s a bit subjective. I’d argue that using the “seq” abstraction is better because it works with all kind of sequences, not just vectors
(seq = first/rest)
most of the time you will work with sequences instead of vectors, like lazy seqs or the results of map, filter, etc. Preemptively constructing vectors from them just to satisfy how the function accesses elements isn't worth it
Since this is #C053AK3F9: If you are new to Clojure, you might be drawn to loop/`recur`. You will see that most of the time there are better alternatives. I recommend avoiding loop/`recur` where possible, and prefer map, reduce, into, etc.
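For example, the indexed sum from the benchmarks earlier in this thread collapses into a single reduce (a sketch using the same data):
(let [v (vec (range 50000))]
  (reduce + 0 v))
;; same result as the loop/recur versions; reduce over a vector uses the
;; vector's internal reduce path, so it's usually at least as fast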
is it too low level to be used
When you write your own version of reduce: maybe. But why would you, when reduce already exists?
Depending on what you're trying to do:
• accumulate: reduce
• side effects: doseq
• return a new sequence: map, filter, etc
• list comprehension / nested sequences: for
• granular control: loop/recur, first/rest
Out of all of these, reduce has the best performance (if you're going to "consume" the entire vector).
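A tiny sketch of those options side by side (toy data, just to show the shapes):
(def v [1 2 3 4])

(reduce + 0 v)                        ;; accumulate => 10
(doseq [x v] (println x))             ;; side effects, returns nil
(map inc v)                           ;; new sequence => (2 3 4 5)
(for [x v :when (odd? x)] (* x x))    ;; comprehension => (1 9)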
Out of all your versions of reduce, it's worth trying one with first/rest. Also, to get a better feeling for timing I often execute that block in the context of (dotimes [_ a-lot] your-code) to get cleaner results.
If you're going to implement reduce yourself you can also use an iterator which should be faster than indexed access.
If you're doing indexed access, using nth instead of invoking a vector might actually be faster because the index won't escape to heap if the compiler can help it
Traversing a vector seq is actually very fast as it’s literally walking over the internal tree (but does have some overhead from seq stuff). Indexed access requires re-traversing the tree for every index. For larger vectors with deeper trees, this may be more pronounced. The tradeoffs between the two approaches are subtle. Also, fyi 1.12 has some enhancements to reduce on vector seqs that may make certain access patterns faster
> 1.12 has some enhancements to reduce on vector seqs that may make certain access patterns faster
Is it in master branch yet?
It’s in alpha1
I need to produce the following HTML from hiccup:
<a href="/register" data-analytics='"Register", {"props":{"plan":"Navigation","location":"footer"}}'> ... </a>
notice the single ' in data-analytics. This is a requirement from the analytics product I'm trying to integrate. Any idea how to get that instead of the normal output, which is ""Register", ... ?
Are you sure this is really required? Is your data analytics product using some weird limited HTML parser? I assume your analytics product will read the DOM as parsed by the browser, so hiccup’s output like…
"<a data-analytics=\""Register"\"></a>"
…should be fine?
the product is http://plausible.io and their docs are very explicit about the quotes
> Do watch the quotes! The data-analytics tag value should have both single and double quotes as shown above.
so I assume they have some weird limited HTML parser 😄
Maybe they just require double quotes in attribute values and recommend to use single quotes so you don’t have to escape the double quotes. When generating HTML with hiccup, this should not be a problem. Have you tried it?
I was trying it out now and it works. It didn't before because of a typo (I forgot to escape some inner quotes). 😄
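For reference, a minimal hiccup sketch of the attribute discussed above (using hiccup2.core, which escapes attribute values by default; the link text is made up):
(require '[hiccup2.core :as h])

(str
 (h/html
  [:a {:href "/register"
       :data-analytics "\"Register\", {\"props\":{\"plan\":\"Navigation\",\"location\":\"footer\"}}"}
   "Register"]))
;; the inner double quotes are emitted as &quot;, which the browser
;; decodes back into literal quotes when it parses the attribute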
should i use (dict :key) or (:key dict)?
or is it up to personal preference?
the second option is nil safe, the first will throw a NullPointerException if the dict is nil
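A quick REPL sketch of that difference (the exact NPE message depends on the JDK/Clojure version):
(def dict {:key 1})

(:key dict)              ;; => 1
(dict :key)              ;; => 1
(:key nil)               ;; => nil
(let [m nil] (m :key))   ;; throws NullPointerException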
When it comes to tight loops and JIT compilation it might matter, but I haven't seen results which indicate which is better yet
I prefer (:key dict), as (dict :key) makes me stop to realise that dict is a map and not a function defined via defn.
I think that I recall someone demonstrating that calling a keyword was faster than calling a map, but I can't remember where I saw that.
I guess that the jit gets to inline it more often since it's a static object rather than a different one each time you call it??
But there is some subtlety regarding the keyword lookup site which I don't understand yet
See also https://guide.clojure.style/ for various suggestions about idiomatic Clojure.
Is there a way to check if a :dependencies entry in a Leiningen project.clj is required somewhere in your source code?
sure, use grep? but programmatically in clojure, or with a tool - dunno 🙂 - I guess another tricky thing is sometimes the package name you have to require/import may not match the name listed in deps exactly
Yeah, and I think a typical use-case is that you want to know if you have any unused deps across the whole project. You don’t necessarily have a single candidate that you want to check on.
Just checked clj-kondo, eastwood, and clojure-lsp, and none of them do this. I guess it’s not surprising. I don’t think that it’s something that can be done with static analysis, because of the issue @U96DD8U80 mentioned. I guess a tool would have to look at the namespaces and/or java packages included in the dep jar and check that against the ns forms.
Hey team, say I have a machine with 8 CPUs and 32GB of memory.
Are there heuristics for what are good JVM options to set for this? i.e. I would guess Xmx and Xms could be optimized, etc
https://blog.gceasy.io/2020/11/05/best-practices-java-memory-arguments-for-containers/
https://learn.microsoft.com/en-us/azure/developer/java/containers/overview
https://practical.li/clojure/reference/clojure-cli/jvm-options.html
https://developers.redhat.com/articles/2022/04/19/java-17-whats-new-openjdks-container-awareness#
Some of these are specifically for containers so they might not apply directly to your situation
But nonetheless it's quite interesting how conservative the JVM is by default
For instance the default maximum heap size is 25% of the total memory, which is ridiculously low, especially if the only thing running on the machine is a JVM application
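As a concrete sketch, one way to raise those defaults from a Clojure CLI alias (the numbers are illustrative, not a recommendation for any particular workload):
;; deps.edn
{:aliases
 {:server
  {:jvm-opts ["-Xms4g"      ;; start with the heap already sized
              "-Xmx24g"]}}} ;; allow most of a 32GB box, leaving room for off-heap and the OS
;; run with: clj -M:server -m your.app.main   (hypothetical main namespace)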
Besides more being generally better, do you have any constraints? A large working set?
Great question Ben! The main use case I have in mind is to support a lot of websocket connections. I am not 100% sure what the constraints will be yet. I would guess we’d be io-bound, and need to use a bunch of ram for caching.
First thing to do is measure. Profile heap usage with a logarithmically increasing number of open connections to get a sense of your memory budget
Thanks Ben! Are there any resources you’d recommend, for me to read up on profiling the jvm? This will be my first time doing it.
Clojure Goes Fast is an excellent initial resource. A few other tools to learn:
• VisualVM
• Java Mission Control (JMC)
• Java Flight Recorder (JFR)
• Eclipse Memory Analyzer (MAT)
Also, Gil Tene, Azul's CTO, has excellent talks on the subject; although they're more about CPU and speed, they should give you a fine idea of what proper measurements look like. On top of that, connect your service to some metrics back end and monitor its resource usage once you let it out to the wild.
Welcome. Feel free to ask if you get tripped up on any subject here, I've invested way too much effort in this 😄
does anyone know why it gives this error: ClassCastException: clojure.lang.Symbol cannot be cast to clojure.lang.IPersistentCollection (NO_SOURCE_PATH:11:1)
it says that it happened in reduce at line 2 but i don't know why
(conj acc (conj (first acc) curr))
When acc is a vector of symbols, (first acc) will be a symbol, and does not support conj.
yes i found it
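For anyone hitting the same thing, a minimal repro of that error (the data here is made up):
(reduce (fn [acc curr]
          (conj acc (conj (first acc) curr)))  ;; (first acc) is a symbol, not a collection
        '[a b]
        '[c d])
;; => ClassCastException: class clojure.lang.Symbol cannot be cast to
;;    class clojure.lang.IPersistentCollection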
yay optional/default arguments
btw do you have any suggestions for my code formatting
ik that but there are no optional positional arguments
somehow clojure has function overloading but doesn’t implement optional args
yeah that's what im doing with defn-with-opt-args
but implementing optional/default arguments with multi arity is a bit clunky
i don’t want boilerplate so i made the macro
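For context, the multi-arity idiom being called clunky here looks roughly like this (names are made up):
(defn greet
  ([name] (greet name "Hello"))                   ;; 1-arity supplies the default
  ([name greeting] (str greeting ", " name "!")))

(greet "Rich")            ;; => "Hello, Rich!"
(greet "Rich" "Howdy")    ;; => "Howdy, Rich!"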