This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-01-29
Channels
- # babashka (4)
- # babashka-sci-dev (96)
- # beginners (79)
- # calva (26)
- # cider (5)
- # clerk (2)
- # clj-kondo (23)
- # clojars (14)
- # clojure (54)
- # clojure-europe (8)
- # clojure-sweden (3)
- # clojurescript (76)
- # datomic (12)
- # deps-new (6)
- # emacs (20)
- # events (3)
- # exercism (1)
- # fulcro (11)
- # funcool (12)
- # hugsql (14)
- # hyperfiddle (6)
- # kaocha (1)
- # lambdaisland (1)
- # lsp (22)
- # malli (1)
- # matcher-combinators (6)
- # nbb (6)
- # off-topic (128)
- # polylith (12)
- # re-frame (4)
- # reagent (1)
- # releases (4)
- # shadow-cljs (8)
- # tools-build (13)
- # tools-deps (13)
- # tree-sitter (5)
How can I write the below code in Clojure:
boolean b1 = true;
boolean b2 = false;
boolean b3 = true;
b1 & b2 | b2 & b3 | b2
bitwise boolean
https://clojure.org/api/cheatsheet Look at the section called Bitwise
(let [b1 true b2 false b3 true] (bit-or (bit-and b1 b2) (bit-and b2 b3) b2))
and eq java &
? @U04V4KLKC
i edited my snippet. and equivalent to java &&
yes, you can pass boolean to bitwise functions
(bit-and true false)
Execution error (IllegalArgumentException) at user/eval7 (REPL:1).
bit operation not supported for: class java.lang.Boolean
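For completeness, a small sketch (not from the thread): since the bitwise functions reject Booleans, the direct translation uses and/or, which short-circuit like Java's && and ||; if you need the non-short-circuiting bitwise behaviour of & and |, map the booleans to 1/0 first.
;; Logical version: short-circuits, like Java's && and ||.
(let [b1 true b2 false b3 true]
  (or (and b1 b2) (and b2 b3) b2))
;; => false

;; Bitwise variant: represent the booleans as 1/0.
(let [bit #(if % 1 0)
      b1 (bit true) b2 (bit false) b3 (bit true)]
  (pos? (bit-or (bit-and b1 b2) (bit-and b2 b3) b2)))
;; => false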
Question: I am using a js/setTimeout to show an error message with a 1-second delay, and reset it as long as I'm typing (so while I'm typing the error is not shown, and only after the last letter is typed does the 1-second timer to show the message start). This is the code:
(let [timeout-id (js/setTimeout (show-email-invalid! new-session button-disabled? email-valid?) 1000)]
Here I initiate it, and I reset it every time I type:
:on-change (fn [x]
             (js/clearTimeout timeout-id))
Any other ideas how I can use a delay method in ClojureScript WITHOUT js/setTimeout?
Various options exist for ClojureScript. You can use https://google.github.io/closure-library/api/goog.async.Delay.html or timeouts in core.async as follows:
(ns example
  (:require [cljs.core.async :refer [timeout <!] :refer-macros [go]]))

;; Waits for 1 second (timeout takes milliseconds).
(go (<! (timeout 1000)))
The pattern you’re describing has a name: debounce (in comparison to throttle). If you’d like to use something built-in you could just call https://google.github.io/closure-library/api/goog.functions.html#debounce
Here’s a post explaining debounce vs throttle: https://css-tricks.com/debouncing-throttling-explained-examples/
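For reference, a minimal sketch (not from the thread) of the built-in Closure debounce; show-error! is just a placeholder name here.
(ns example.debounce
  (:require [goog.functions :as gfn]))

(defn show-error! []
  (js/console.log "email invalid"))

;; Returns a new function that only calls show-error! once
;; 1000 ms have passed without another invocation.
(def show-error-debounced!
  (gfn/debounce show-error! 1000))

;; e.g. :on-change (fn [_] (show-error-debounced!))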
Hey, I used the built-in debounce like this. It works, but how do I make it reset just like I do with clearTimeout for timeout-id:
(ns app.strive-forms.fields-widgets.email-input-field
  (:import [goog.async Debouncer]))

(defn debounce [f interval]
  (let [dbnc (Debouncer. f interval)]
    ;; We use apply here to support functions of various arities
    (fn [& args] (.apply (.-fire dbnc) dbnc (to-array args)))))

(let [timeout-id (debounce (show-email-invalid! new-session button-disabled? email-valid?) 1000)]
Sorry, I just re-read the OP and realized I misunderstood the question. If you only want exactly 1 second after the last character then you probably want to use Delay as @U0479UCF48H points out (since every time you call Delay#start it will reset the timer).
IS there an example of how to use it? Like this link shows how to use Debouncer https://martinklepsch.org/posts/simple-debouncing-in-clojurescript.html
;; Delay the function `f` you're interested in
;; Returns a function that will reset the timer
;; (needs (:import [goog.async Delay]) in the ns form)
(defn make-delayer [f interval]
  (let [delayer (Delay. f interval)]
    (fn [& args] (.start delayer))))

;; Save the reset function
(let [reset-delay (make-delayer show-error 1000)]
  ;; restart the timer on every change (pass a fn, don't call it here)
  {:on-change (fn [_] (reset-delay))})
;; if reset-delay doesn't get called, it will
;; call `show-error` after 1 second
^ I took a stab at it, but I'm not near a REPL at the moment, so I'm not sure if that will work. 🤞
The f is a non-arg function, so you need to wrap your code: (fn [] (show-email-invalid! new-session button-disabled? email-valid?))
It is internally wrapped in an fn:
(defn show-email-invalid! [new-session button-disabled? email-valid?]
  (fn []
    (when (and (not @new-session) @button-disabled?)
      (reset! email-valid? false))))
Make sure you're creating just one reset-delay function and e.g. not putting the logic inside the render loop or something (that would create new Delays with every key press, etc).
(defn make-delayer [f interval]
  (let [delayer (Delay. f interval)]
    (fn [& args] (.start delayer))))

(let [timeout-id (make-delayer (show-email-invalid! new-session button-disabled? email-valid?) 1000)]
This is how I create it.
Sorry, need to run afk for a while; if you don't figure it out in the meantime, feel free to ping me later :)
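Pulling the thread's advice together, a minimal sketch (assuming Reagent; show-error is just a console.log placeholder here): the Delay is created once in the outer let of a form-2 component, so every keystroke restarts the same timer instead of creating a new Delay per render.
(ns example.email-field
  (:require [reagent.core :as r])
  (:import [goog.async Delay]))

(defn email-field []
  (let [value   (r/atom "")
        delayer (Delay. #(js/console.log "show error") 1000)]
    (fn []
      [:input {:value @value
               :on-change (fn [e]
                            (reset! value (.. e -target -value))
                            ;; restart the 1-second timer on every keystroke
                            (.start delayer))}])))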
I have a limited understanding of data structures. Any recommended materials?
I have a site I'm working on. There is personal data. It's difficult for me to visualize the best way this data should be represented in Clojure for readability and management. I'm also unsure where data representation in Clojure/ClojureScript ends and where a separate database begins. I'm somewhat confused as to whether a ClojureScript web app even needs a separate database.
I'm unclear as to the difference between '(foo) and 'foo. Could someone explain?
'foo is the symbol foo; '(foo) is a list containing a single element: the symbol foo
'(foo) is the same as (list 'foo)
so 'foo is :foo?
:foo is a keyword. Keywords and symbols are similar but different. Some points of interest:
• https://clojure.org/guides/weird_characters#_keyword
• https://clojure.org/guides/weird_characters#_quote
• https://clojure.org/reference/reader#_symbols
• https://clojure.org/reference/reader#_literals (the "Keywords" section)
• https://clojure.org/guides/faq#why_keywords
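For reference (not from the thread), a quick REPL session makes the distinction concrete:
user=> (type 'foo)
clojure.lang.Symbol
user=> (type :foo)
clojure.lang.Keyword
user=> (type '(foo))
clojure.lang.PersistentList
user=> (= '(foo) (list 'foo))
true
user=> (= 'foo :foo)
false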
what's the simplest way to merge a sequence of maps like ({:foo "val1"} {:foo "val2"}) into {:foo ["val1" "val2"]}? (apply merge-with vector) works for small sequences but it appears to be creating a vector for every value because it blows up the stack on large datasets.
Well, even with 3 maps (with overlapping keys) the call to vector will make nested vectors, which I assume is not desired. Assuming non-vector values, a fn that conj's if the first arg is a vector and otherwise calls vector seems to be decent for 1000 maps (not sure what scale 'large' is)
yeah in most cases it's gonna be in that range @U013JFLRFS8. Previously I had (apply merge-with (fn [& v] (into [] (flatten (conj [] v))))) which worked but felt a little slow (though there's other stuff happening in the function so it may not be this particular piece of code).
Unless I'm screwing something up, (reduce merge-with vector) behaves like (apply merge '({:foo "val1"} {:foo "val2"}))
@U05476190
(apply merge-with (fn [fst snd] (cond
                                  (vector? fst) (conj fst snd)
                                  :else (vector fst snd)))
       (map #(assoc {} :foo %) (range 10)))
=> {:foo [0 1 2 3 4 5 6 7 8 9]}
nice
@U03QBKTVA0N There might be a better transducer solution, but I took a stab at it out of curiosity:
(comment
  (require '[criterium.core :as criterium])
  (require '[net.cgrand.xforms :as x])

  (def coll
    (map #(hash-map :a % :b % :c %) (range 1000000)))

  (defn merge1 [coll]
    (apply merge-with
           (fn [fst snd] (if (vector? fst) (conj fst snd) [fst snd]))
           coll))

  (defn merge2 [coll]
    (into {}
          (comp
           (mapcat identity)
           (x/by-key key (comp (map val) (x/into []))))
          coll))

  (= (merge1 coll) (merge2 coll))

  (criterium/quick-bench (merge1 coll))
  ;; Execution time mean : 375.307276 ms
  ;; Execution time std-deviation : 28.760415 ms

  (criterium/quick-bench (merge2 coll))
  ;; Execution time mean : 138.256720 ms
  ;; Execution time std-deviation : 6.537487 ms
  #__)
With reduce it would be like this:
(reduce
 (fn [m1 m2] (merge-with vector m1 m2))
 t)
FYI
I'm not too sure what blows up the stack, it could maybe be apply? So maybe you should try to see if reduce does the same or not?
Actually, I think it's merge-with that blows up the stack when using apply. It looks like merge-with will use recursion to merge all the maps provided into each other. apply is like calling merge-with with a large number of arguments, which I think then get recursively merged by merge-with, and so if there are too many, merge-with will stackoverflow.
If that's correct, then the reduce version should work, because it never calls merge-with with more than two arguments, and reduce is not recursive, so the iteration over the sequence should not stackoverflow.
merge-with internally calls reduce1, so there shouldn't be a big difference. I initially read "blow the call stack" and I also assumed the problem was with an initial big collection of inputs. Given what was later posted in the thread, I think the actual performance issue was building up nested data only to then go and flatten all of them.
(apply merge-with conj {:foo []}
       (repeatedly 10000000 #(hash-map :foo (rand-int 10))))
this worked for me. conj avoids the nesting of vector (and just feels like what we want to do), and the literal map is a "patch value" like one would in similar circumstances use fnil to provide.
reduce1 is the function that is recursive, at least if I'm not misreading the core implementation. It says it gets redefined later, but I don't see that; I simply see reduce being defined later, so reduce1 seems to stay recursive the whole time.
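As an aside, here is a sketch (not from the thread) of the fnil "patch value" idea written as a plain reduce over the maps; merge-into-vectors is just an illustrative name. fnil supplies the empty vector the first time a key is seen, so no nesting ever appears.
(defn merge-into-vectors [maps]
  (reduce (fn [acc m]
            (reduce-kv (fn [acc k v]
                         ;; (fnil conj []) conj'es onto [] when the key is new
                         (update acc k (fnil conj []) v))
                       acc m))
          {} maps))

(merge-into-vectors '({:foo "val1"} {:foo "val2"} {:foo "val3"}))
;; => {:foo ["val1" "val2" "val3"]}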
Also where do you see flattening happening?
(apply merge-with vector '({:foo "val1"} {:foo "val2"}))
My understanding was that this was just pseudo-code, because it wouldn't work for a longer sequence:
(apply merge-with vector '({:foo "val1"} {:foo "val2"} {:foo "val3"}))
;; => {:foo [["val1" "val2"] "val3"]}
The flatten was mentioned earlier in the thread:
(apply merge-with (fn [& v] (into [] (flatten (conj [] v)))))
My interpretation was that vector was recommended as a merge function (which is why you would want to flatten):
(reduce
 (fn [m1 m2] (merge-with vector m1 m2))
 '({:foo "val1"} {:foo "val2"} {:foo "val3"} {:foo "val4"}))
;; => {:foo [[["val1" "val2"] "val3"] "val4"]}
For context, as @U05476190 says, I replaced the (working) flatten solution with (apply merge-with vector), but I did fail to specify that it only works on two maps before the nested vectors start to interfere.
I read somewhere awhile back that flatten is not a good solution and should be avoided.
> I read somewhere awhile back that flatten is not a good solution and should be avoided.
I am of this opinion, yes. It's like the joke about regexes: now you've probably got 2 problems ;)
@U03QBKTVA0N What code gave you a stackoverflow?
Using apply merge-with vector on about 1500 maps did, I have not had a chance to check out the reduce version though.
Each map in my data had 90 k-v pairs though, I think that might be the source but idk.
that's presumably creating a LOT of nesting
flatten uses tree-seq and I think that may quickly explode the amount of thunks in memory
yeah but the flatten code wasn't blowing up, it was just slow
and it felt a little too "code smelly" to me
user=> (set! *print-level* 10)
10
user=> (apply merge-with vector (repeatedly 10000000 #(hash-map :foo (rand-int 10))))
{:foo [[[[[[[[[# 8] 3] 1] 8] 7] 8] 6] 4] 7]}
If the print-level is unbounded, and you try to print a very deeply nested data-structure, it will stackoverflow
I know we've gone off the deep-end at this point, but if you're doing a lot of operations that sound columnar, you may be interested in looking at https://github.com/scicloj/tablecloth
such an interesting deep end tho 😆
As for your original problem, this solves it:
user=> (reduce
#_=> (fn [m1 m2] (merge-with #(if (vector? %1) (conj %1 %2) (vector %1 %2)) m1 m2))
#_=> '({:foo "val1"} {:foo "val2"} {:foo "val3"}))
{:foo ["val1" "val2" "val3"]}
Or also works with apply:
user=> (apply
#_=> merge-with #(if (vector? %1) (conj %1 %2) (vector %1 %2))
#_=> '({:foo "val1"} {:foo "val2"} {:foo "val3"}))
{:foo ["val1" "val2" "val3"]}
yeah I ended up with almost exactly that yesterday
(apply merge-with
       (fn [fst snd] (if (vector? fst)
                       (conj fst snd)
                       (vector fst snd))))
When I benchmarked it on my machine, the apply merge-with was slightly faster than the reduce version (I suspect because the apply version internally calls directly to reduce1). But I did get it running faster with transducers:
https://clojurians.slack.com/archives/C053AK3F9/p1675015405119949?thread_ts=1675009790.585169&cid=C053AK3F9
Ya, the reduce is redundant here, because merge-with already reduces. I think the reduce would do multiple merge-with calls in a loop, whereas apply would have merge-with merge all of them in one loop. It's probably minimal either way.
I guess reduce1 does probably get replaced by a real reduce at some point when loaded, though I could not find where
> I think the reduce would do multiple merge-with calls in a loop, whereas apply would have merge-with merge all of them in one loop.
This is how I explained it to myself. But it was just a REPL micro-benchmark. I wonder if a warmed-up server would JIT it away. But then the real performance always depends on your data anyway, so 🤷
150ms faster at 1 million N isn't that big either. It depends what you're going for, but millisecond performance improvements at large N are normally not worth a lot of time investment.
> 150ms faster at 1 million N isn't that big either.
Sure, but I don't think that's a fair assessment. Those numbers show a 2.7x speedup with the exact same algorithm (we're just conj'ing onto a collection). So I would argue it's not CPU bound, and the difference in speed can only be explained by a lot more work being done allocating memory and GC. I'm sure if you profile it from a memory point of view it will show a different story.
Whether that difference is important to you, depends on how often you're doing it and what other potential resources you are starving. It may or may not be a concern for your specific application.
It doesn't really matter if it's allocation based, the impact is still 150ms to alloc/reclaim memory, and only for N = 1 million. But ya, it depends what you're doing, but most people aren't doing high performance with Clojure anyways. Maybe if you were sensitive to GC pauses, but again, I don't like advice that considers outlier use-cases. For most use-cases it doesn't seem to matter. I think if you want better performance, I'd rethink why I'm even having to do this merging of maps in the first place. A better data model and data structures might save a lot more time.
I see a lot of people waste their time benchmarking between apply and reduce, and it's not a good use of time in my opinion. If it's for fun and experimentation it's fine, but if you're implementing an app, pick whichever and move on. Then when you have a full app, if you're not happy with the performance of certain things, profile and optimize as needed, maybe you do switch something from reduce to apply at that point, but only because you saw it in the flame graph 😛