This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-09-12
Channels
- # announcements (3)
- # babashka (6)
- # beginners (84)
- # biff (1)
- # cider (7)
- # cljsrn (1)
- # clojure (18)
- # clojure-australia (3)
- # clojure-dev (21)
- # clojure-france (1)
- # clojure-spec (6)
- # clojurescript (78)
- # datomic (34)
- # emacs (5)
- # exercism (32)
- # graalvm (1)
- # helix (2)
- # hyperfiddle (3)
- # lsp (36)
- # malli (4)
- # missionary (3)
- # off-topic (54)
- # re-frame (14)
- # releases (2)
- # sql (31)
- # vim (9)
When I was using Hiccup, I found I could not use it inside an anonymous function literal. This works:
(defn display-todo []
  (let [todos (re-frame/subscribe [::subs/mock-data])]
    (log (clj->js @todos))
    (map (fn [x] [:span (str (:block/string x))]) @todos)))
but this did not work:
(defn display-todo []
  (let [todos (re-frame/subscribe [::subs/mock-data])]
    (log (clj->js @todos))
    (map #([:span (str (:block/string %))]) @todos)))
How can I explain this?
Which is why the stacktrace you got said something about calling a function with the wrong number of arguments: you are invoking the vector as a function (which you can do, with an index argument) with the wrong number of arguments.
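A quick REPL check makes the difference concrete (a sketch; the key point is what the #() reader macro expands to):

```clojure
;; #([:span %]) expands to a function whose body *invokes* the vector
;; with zero arguments (gensym names vary between runs):
'#([:span %])
;; => (fn* [p1__1#] ([:span p1__1#]))

;; Vectors are functions of one argument (an index), so calling one
;; with zero args throws an ArityException:
;; ([:span "todo"])    ; ArityException: wrong number of args (0)
([:span "todo"] 0)     ; => :span  (with an index it works)

;; The fn form *returns* the vector instead of calling it:
((fn [x] [:span x]) "todo")   ; => [:span "todo"]
```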
Just curious, is it actually possible to write it such that it is equivalent to (fn [] [...])? Or is it always necessary to use the longhand notation in that case?
This made me curious, so I looked it up.
Yes, using (fn [] [1 2 3]) is better than either of:
• (fn [] (vector 1 2 3))
• #(vector 1 2 3)
However, there’s no practical difference to:
• #(do [1 2 3])
It’s purely about what you might think is the better idiom.
Personally, I think that do implies a sequence of operations to perform, typically with a side-effect (such as printing or logging), so I wouldn’t use it here.
The reason why it’s “better” is that calling vector loads two strings, “clojure.core” and “vector”, then calls clojure.lang.RT.var to get the vector function. Then it loads the remaining args onto the stack and calls it.
Using the literal syntax […] is more direct. It makes a direct call to either clojure.lang.Tuple/create (for vectors of 6 or fewer elements) or clojure.lang.RT/vector (which is what the Tuple/create method calls).
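You can see from the REPL that the forms are equivalent in result, even though the compiled paths differ (a sketch; Tuple/create and RT/vector are the internal classes named above):

```clojure
;; All three produce equal vectors:
(= [1 2 3]
   (vector 1 2 3)
   (clojure.lang.Tuple/create 1 2 3))   ; => true

;; The literal compiles to a direct static call; (vector 1 2 3)
;; first resolves the #'clojure.core/vector var and then invokes it,
;; which is the extra work described above.
(class [1 2 3])   ; => clojure.lang.PersistentVector
```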
;; how do I achieve this
(defn do-stuff []
  ;; every 30ms call this
  ;; if it throws an exception, cancel all this
  (ProgressManager/checkCanceled)
  ;; do something presumably expensive
  (go ...?))
If 'all this' has side effects, you'll need some way of signalling that you want to undo those side effects (or ideally capture them all and only process them at the end)
Yeah, that is fine. Basically what I would like to have is maybe a channel like timeout, but for this checkCanceled thing. But checkCanceled needs to be on the same thread. I don't have experience yet with sticking those channels together.
One common idiom is to use a channel (called a "shut-down channel" or "control channel") to signal the go-loop to stop. This requires the go-loop to periodically check for values put on this channel.
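A minimal sketch of that idiom (assuming core.async is on the classpath; start-worker and the 30ms interval are illustrative, not from a library):

```clojure
(require '[clojure.core.async :as a])

(defn start-worker
  "Calls work-fn roughly every 30ms until something is put on
   (or closes) the returned stop channel."
  [work-fn]
  (let [stop (a/chan)]
    (a/go-loop []
      (let [[_ port] (a/alts! [stop (a/timeout 30)])]
        (when-not (= port stop)   ; value/close on stop => shut down
          (work-fn)
          (recur))))
    stop))

;; Usage:
;; (def stop-ch (start-worker #(println "tick")))
;; (a/close! stop-ch)   ; signals the go-loop to exit
```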
Yeah, what I was stuck on was that this checkCanceled function only threw an exception on the initial thread, so naively putting it into go didn't work. For now I made it work with future.
I'm not sure if I'm just missing it, but none of the example sente apps seem to touch on websocket subscriptions. I'd like ws clients to be able to send e.g. {:subscribe {:article 14}} and get all future updates for that key until they unsubscribe. What's the best way to go about doing that?
I was looking at the future function. Can we use it for all function calls whose results are independent of each other?
I was running through a little intro tutorial as a morning warmup but encountered a snag in solving one of the exercises
;; Write a zero-argument function that returns the `identity` function
(defn gen-identity
  [] ; zero arguments
  identity)
;; EXERCISE
;; Fix this function so that it returns a function that _behaves_
;; like the identity function (don't return `same`, or `identity`).
(defn gen-identity-v2
  []
  (fn [x] x))
;; EXERCISE
;; Replace 'FIX1 with a call to the `gen-identity` function,
;; and 'FIX2 with a call to the `gen-identity-v2` function,
;; such that the following evaluates to true.
(= identity
   (gen-identity)
   (gen-identity-v2)) ;; false
My gen-identity-v2 function makes it false. What would you use there to make the last expression evaluate to true?
I would expect that to return false so I'm confused
The exercise wants it to return true, so I was asking how to change the gen-identity-v2 function to make that happen.
I just added that comment to show what I was getting, not what the exercise called for. My bad
Right, my apologies. I expect the equality check the exercise is performing to return false, because (as I remember) anonymous functions aren’t equal to other functions, so I’m also not sure how to write the correct answer.
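That behavior is easy to confirm at a REPL: Clojure functions compare by object identity, so two separately created anonymous functions are never equal:

```clojure
(= identity identity)        ; => true  (same object, via the same var)
(= (fn [x] x) (fn [x] x))    ; => false (two distinct function objects)
(let [f (fn [x] x)]
  (= f f))                   ; => true  (identical object)
```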
Right? Haha. I figured it was to teach the concept of function equality but then it never supplies an answer or explanation.
Oh wait, I did find the solutions branch. They had the same gen-identity-v2 function as I did, but they called it with the identity function:
(= identity
   (gen-identity)
   ((gen-identity-v2) identity))
Hm yeah, that works but is not what I expected from the text. Glad you have a solution!
In context they were talking about higher order functions and were doing similar things to that so that's on me a bit.
Is there a performance difference between these 3 approaches to reach into a nested map? Any other pros or cons or would you consider it entirely subjective?
(def foo {:a 1 :b {:c 2 :d 3}})
(get-in foo [:b :d]) ;; 3
((comp :d :b) foo) ;; 3
(-> foo :b :d) ;; 3
Criterium
There's a :bench alias in my dot-clojure, in case you're using that.
As for pros/cons, I tend to use the third approach most of the time but I think for most people the first approach is more "obvious" and readable. The second is way too cryptic.
(! 647)-> clj -A:bench
user=> (require '[criterium.core :refer [bench]])
nil
user=> (def foo {:a 1 :b {:c 2 :d 3}})
#'user/foo
user=> (bench (get-in foo [:b :d]))
Evaluation count : 1155680820 in 60 samples of 19261347 calls.
Execution time mean : 44.709843 ns
...
user=> (bench ((comp :d :b) foo))
Evaluation count : 2855798280 in 60 samples of 47596638 calls.
Execution time mean : 13.269613 ns
...
user=> (bench (-> foo :b :d))
Evaluation count : 2365929660 in 60 samples of 39432161 calls.
Execution time mean : 17.841840 ns
...
user=>
I'd be interested in knowing why that's the case. The fastest seems to be (.valAt (.valAt foo :b) :d), but now we're getting a little silly.
get-in is the slowest one? That’s wild. It feels the most idiomatic to me! I only use threading macros when adding in other functions like first.
get-in is slow because it loops across the keys, each time checking for a missing key.
EDIT: Actually it is because it uses reduce1, a slower version of reduce.
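For comparison, a hand-rolled lookup that iterates with the ordinary protocol-based reduce instead of reduce1 looks like this sketch (fast-get-in is an illustrative name, not a core function, and it ignores the not-found arity of get-in):

```clojure
(defn fast-get-in
  "Like (get-in m ks) with no default, but iterates with the
   ordinary reduce rather than the pre-protocol reduce1 that
   clojure.core/get-in uses internally."
  [m ks]
  (reduce get m ks))

(fast-get-in {:a 1 :b {:c 2 :d 3}} [:b :d])   ; => 3
```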
We're talking a "few" nanoseconds here -- this is always the issue with being concerned about low-level performance: it's not worth looking at unless you have a demonstrable performance problem and you've profiled your code and identified a bottleneck.
Also, performance of low-level stuff can change from (Clojure) version to version, so optimizing for it "in advance" is pretty much never worthwhile.
I would guess that the native compilation of (comp :d :b) and executing that on the object may be more efficient than when it compiles calling :b then :d in order. But I haven’t ever really looked up how to access the native instructions that get generated, and without access to that I can only speculate.
note: if someone knows how to dump compiled code (and I know it can be done), then I’d love to hear how. This could save me hours of Googling 🙂
Do you mean like this? https://github.com/clojure-goes-fast/clj-java-decompiler
I run javap regularly, so I’m good there. But I don’t know how to get the native instructions from the JIT once the compilation threshold has been met
I tried the following with success: https://www.morling.dev/blog/building-hsdis-for-openjdk-15/
@U051N6TTC you can use JITWatch as well, I used it a few times
@U024X3V2YN4 if you look at the implementations you'll find the fastest path without dispatching directly on .valAt is invoking the map directly on the key, which calls valAt with no dynamic dispatch
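In other words, these are the three call shapes being compared (a sketch; relative timings will vary by JVM and Clojure version):

```clojure
(def foo {:a 1 :b {:c 2 :d 3}})

(:b foo)   ; keyword invoke: Keyword.invoke, the longer path linked below
(foo :b)   ; map invoke: APersistentMap.invoke calls valAt directly
(.valAt ^clojure.lang.ILookup foo :b)   ; interop: no IFn dispatch at all
;; all three return {:c 2 :d 3}
```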
(:a foo) is consistently slightly faster on my benchmarks.
EDIT: no, it's not. I was looking at the wrong table
damn, I don't have the numbers checking the difference between calling the keyword and calling the map. Need to run those with JMH, too
https://github.com/bsless/clj-fast/blob/master/extra/clj-fast.analysis/doc/results.md#throughput-4
The keyword invoke gives a longer path: https://github.com/clojure/clojure/blob/b8132f92f3c3862aa6cdd8a72e4e74802a63f673/src/jvm/clojure/lang/Keyword.java#L144
Invoking the map: https://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/APersistentMap.java#L291
The remaining question is how it gets JIT compiled and if it's a megamorphic call site, but I couldn't figure it out just by looking at it
Regardless, the biggest performance impact here is the iteration by way of reduce1 in get-in.
It's defined before reduce because reduce can only be defined after you load protocols.
Hey, for a while now, Emacs opens the CIDER result buffer (from C-c C-p) at the bottom of the frame, in a horizontal split. I would really like to get my old default back: opening the result buffer on the right side, in a vertical split. But I can't figure out how to do it. I can't even find the right keywords to search for this problem. Any help is appreciated 😄
Emacs has a few functions and settings that relate to window placement. This might help: https://stackoverflow.com/questions/7997590/how-to-change-the-default-split-screen-direction ... If you resize the frame to short and wide, or tall and thin, does it change the behaviour? If so, you can just change the preferred split size.
I think C-x 3 is vertical and C-x 2 is horizontal
Placement can also be influenced by how much space Emacs has to create a new window as a new column