#beginners
2023-01-29
quan xing03:01:30

How can I write the below code in Clojure:

boolean b1 = true;
boolean b2 = false;
boolean b3 = true;
b1 & b2 | b2 & b3 | b2
bitwise boolean

didibus03:01:55

https://clojure.org/api/cheatsheet Look at the section called Bitwise

delaguardo10:01:29

(let [b1 true b2 false b3 true] (bit-or (bit-and b1 b2) (bit-and b2 b3) b2))

quan xing10:01:17

Can I pass true and false when calling bit-and?

delaguardo10:01:54

I edited my snippet. and is equivalent to Java's &&.

delaguardo10:01:30

yes, you can pass boolean to bitwise functions

quan xing10:01:42

(bit-and true false)
Execution error (IllegalArgumentException) at user/eval7 (REPL:1).
bit operation not supported for: class java.lang.Boolean

phill12:01:44

and is like &&; bit-and is like &.
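
(A minimal sketch of the original Java expression written with Clojure's boolean operators; for plain booleans, and/or give the same result as Java's & and |, they just short-circuit.)

(let [b1 true b2 false b3 true]
  (or (and b1 b2) (and b2 b3) b2))
;; => false, same value as the Java expression above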

M J14:01:24

Question: I am using a js/setTimeout to show an error message with a 1-second delay, and I reset it while I'm typing (so the error isn't shown while typing; only after the last letter is typed does the 1-second timer to show the message start). This is the code:

(let [timeout-id (js/setTimeout (show-email-invalid! new-session button-disabled? email-valid?) 1000)]
Here I initiate it, and I reset it every time I type:
:on-change (fn [x]
               (js/clearTimeout timeout-id))
Any other ideas for how to do a delay in ClojureScript WITHOUT js/setTimeout?

hifumi12314:01:26

Various options exist for ClojureScript. You can use https://google.github.io/closure-library/api/goog.async.Delay.html or timeouts in core.async as follows:

(ns example
  (:require [cljs.core.async :refer [timeout <!] :refer-macros [go]]))

;; Waits for 1 second (timeout takes milliseconds).
(go (<! (timeout 1000)))

pithyless14:01:54

The pattern you’re describing has a name: debounce (in comparison to throttle). If you’d like to use something built-in you could just call https://google.github.io/closure-library/api/goog.functions.html#debounce

👀 2
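
(For reference, a minimal sketch of calling the built-in goog.functions/debounce suggested above; the handler name and its body are hypothetical.)

(ns example
  (:require [goog.functions :as gfn]))

;; Returns a wrapped function that only fires after `interval` ms
;; have passed without another call.
(def debounced-show-error
  (gfn/debounce (fn [] (js/console.log "show error")) 1000))
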
M J14:01:25

Hey, I used the built-in debounce like this. It works, but how do I make it reset, just like I do with clearTimeout for timeout-id:

(ns app.strive-forms.fields-widgets.email-input-field
  (:import [goog.async Debouncer]))

(defn debounce [f interval]
  (let [dbnc (Debouncer. f interval)]
    ;; We use apply here to support functions of various arities
    (fn [& args] (.apply (.-fire dbnc) dbnc (to-array args)))))

(let [timeout-id (debounce (show-email-invalid! new-session button-disabled? email-valid?) 1000)]

pithyless14:01:14

Sorry, I just re-read the OT and realized I misunderstood the question. If you only want exactly 1 second after the last character, then you probably want to use Delay as @U0479UCF48H points out (since every time you call Delay#start it will reset the timer).

M J14:01:05

Is there an example of how to use it? Like how this link shows how to use Debouncer: https://martinklepsch.org/posts/simple-debouncing-in-clojurescript.html

pithyless15:01:17

;; assumes (:import [goog.async Delay]) in the ns form
;; Delay the function `f` you're interested in.
;; Returns a function that will (re)start the timer.
(defn make-delayer [f interval]
  (let [delayer (Delay. f interval)]
    (fn [& args] (.start delayer))))

;; Save the reset function
(let [reset-delay (make-delayer show-error 1000)]
  ;; restart the timer on every change
  {:on-change (fn [_] (reset-delay))})

;; if reset-delay doesn't get called again within the interval,
;; it will call `show-error` after 1 second

pithyless15:01:08

^ I took a stab at it, but I'm not near a REPL at the moment, so I'm not sure if that will work. 🤞

pithyless15:01:13

The f is a non-arg function, so you need to wrap your code: (fn [] (show-email-invalid! new-session button-disabled? email-valid?))

M J15:01:10

It already wraps it in an fn internally:

(defn show-email-invalid! [new-session button-disabled? email-valid?]
  (fn []
    (when (and (not @new-session) @button-disabled?)
      (reset! email-valid? false))))

M J15:01:21

I'm trying your example, it doesn't reset.

pithyless15:01:49

Make sure you're creating just one reset-delay function and e.g. not putting the logic inside the render loop or something (that would create new Delays with every key press, etc).
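
(A minimal sketch of the pattern being described, assuming a reagent-style form-2 component; the component and handler names are hypothetical.)

(defn email-field []
  ;; the delayer is created once, in the outer let
  (let [reset-delay (make-delayer show-error 1000)]
    (fn []
      ;; the inner render fn runs on every re-render and reuses the same Delay
      [:input {:on-change (fn [_] (reset-delay))}])))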

M J15:01:26

(defn make-delayer [f interval]
  (let [delayer (Delay. f interval)]
    (fn [& args] (.start delayer))))
(let [timeout-id (make-delayer (show-email-invalid! new-session button-disabled? email-valid?) 1000)]
This is how I create it..

pithyless15:01:06

Yeah, but is that let being called multiple times when you press keys?

M J15:01:18

Nope, just once

M J15:01:50

Wait, I think it is called yes

pithyless15:01:07

Maybe move it to a def for now to simplify debugging

pithyless15:01:43

Sorry, need to run afk for a while; if you don't figure it out in the meantime, feel free to ping me later :)

M J15:01:48

I had

(let []
  (fn []
    (let [timeout-id]...

M J15:01:15

Moved it outside. Now it works! THANKS a lot!!!

Clojuri0an15:01:58

I have a limited understanding of data structures. Any recommended materials?

Guild Navigator16:01:38

This is quite good and easy to understand https://amzn.to/409CHXV

👍 2
Clojuri0an15:01:39

I have a site I'm working on that involves personal data. It's difficult for me to visualize the best way this data should be represented in Clojure for readability and manageability. I'm also unsure where data representations in Clojure/ClojureScript end and where a separate database begins; I'm somewhat confused as to whether a ClojureScript webapp even needs a separate database.

Guild Navigator16:01:05

I'm unclear as to the difference between '(foo) and 'foo. Could someone explain?

rolt16:01:07

'foo is the symbol foo; '(foo) is a list containing a single element: the symbol foo. '(foo) is the same as (list 'foo).

Guild Navigator16:01:16

so 'foo is :foo?
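
(A quick REPL check of the distinction explained above; note that a symbol is not a keyword.)

user=> 'foo
foo
user=> (type 'foo)
clojure.lang.Symbol
user=> '(foo)
(foo)
user=> (= '(foo) (list 'foo))
true
user=> (= 'foo :foo)
false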

Ben Lieberman16:01:50

what's the simplest way to merge a sequence of maps like ({:foo "val1"} {:foo "val2"}) into {:foo ["val1" "val2"]}? (apply merge-with vector) works for small sequences, but it appears to be creating a vector for every value because it blows up the stack on large datasets.

2
pithyless16:01:18

reduce instead of apply?

2
Bob B16:01:47

Well, even with 3 maps (with overlapping keys) the call to vector will make nested vectors, which I assume is not desired. Assuming non-vector values, a fn that conj's if the first arg is a vector and otherwise calls vector seems to be decent for 1000 maps (not sure what scale 'large' is)

Ben Lieberman16:01:55

yeah, in most cases it's gonna be in that range @U013JFLRFS8. Previously I had (apply merge-with (fn [& v] (into [] (flatten (conj [] v))))), which worked but felt a little slow (though there's other stuff happening in the function, so it may not be this particular piece of code).

Ben Lieberman16:01:11

Unless I'm screwing something up (reduce merge-with vector) behaves like (apply merge '({:foo "val1"} {:foo "val2"})) @U05476190

Ben Lieberman16:01:47

(apply merge-with (fn [fst snd] (cond
                                  (vector? fst) (conj fst snd)
                                  :else (vector fst snd))) 
       (map #(assoc {} :foo %) (range 10)))
=> {:foo [0 1 2 3 4 5 6 7 8 9]}
nice

pithyless18:01:25

@U03QBKTVA0N There might be a better transducer solution, but I took a stab at it out of curiosity:

(comment

  (require '[criterium.core :as criterium])
  (require '[net.cgrand.xforms :as x])

  (def coll
    (map #(hash-map :a % :b % :c %) (range 1000000)))

  (defn merge1 [coll]
    (apply merge-with
           (fn [fst snd] (if (vector? fst) (conj fst snd) [fst snd]))
           coll))

  (defn merge2 [coll]
    (into {}
          (comp
           (mapcat identity)
           (x/by-key key (comp (map val) (x/into []))))
          coll))

  (= (merge1 coll) (merge2 coll))

  (criterium/quick-bench (merge1 coll))
  ;;            Execution time mean : 375.307276 ms
  ;;   Execution time std-deviation : 28.760415 ms

  (criterium/quick-bench (merge2 coll))
  ;;            Execution time mean : 138.256720 ms
  ;;   Execution time std-deviation : 6.537487 ms

  #__)

👀 2
didibus05:01:24

With reduce it would be like this:

(reduce
 (fn[m1 m2] (merge-with vector m1 m2))
 t)
FYI I'm not too sure what blows up the stack; it could maybe be apply? So maybe you should try to see if reduce does the same or not.

didibus05:01:38

Actually, I think it's merge-with that blows up the stack when using apply. It looks like merge-with will use recursion to merge all the maps provided into each other. apply is like calling merge-with with a large number of arguments, which I think then get recursively merged by merge-with, and so if there are too many, merge-with will stackoverflow. If that's correct, then the reduce version should work, because it never calls merge-with with more than two arguments, and reduce is not recursive, so the iteration over the sequence should not stackoverflow.

pithyless07:01:17

merge-with internally calls reduce1 so there shouldn't be a big difference. I initially read "blow the call stack" and I also assumed the problem was with an initial big collection of inputs. Given what was later posted in the thread, I think the actual performance issue was building up nested data only to then go and flatten all of them.

daveliepmann12:01:00

(apply merge-with conj {:foo []}
         (repeatedly 10000000 #(hash-map :foo (rand-int 10))))
this worked for me. conj avoids the nesting of vector (and just feels like what we want to do), and the literal map is a "patch value", like one would in similar circumstances provide with fnil.
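
(A sketch of the fnil-style alternative alluded to above, not taken from the thread: (fnil conj []) starts a fresh vector the first time a key is seen, so no patch map is needed and it works for arbitrary keys. Note that every value ends up in a vector, even for keys that appear only once.)

(reduce (fn [acc m]
          (reduce-kv (fn [acc k v]
                       (update acc k (fnil conj []) v))
                     acc
                     m))
        {}
        '({:foo "val1"} {:foo "val2"} {:foo "val3"}))
;; => {:foo ["val1" "val2" "val3"]}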

didibus16:01:57

reduce1 is the function that is recursive, at least if I'm not misreading the core implementation. It says it gets redefined later, but I don't see that, I simply see reduce being defined later, so reduce1 seems to stay recursive the whole time. Also where do you see flattening happening?

(apply merge-with vector '({:foo "val1"} {:foo "val2"}))

pithyless17:01:19

My understanding was that this was just pseudo-code, because it wouldn't work for a longer sequence:

(apply merge-with vector '({:foo "val1"} {:foo "val2"} {:foo "val3"}))
;; => {:foo [["val1" "val2"] "val3"]}
The flatten was mentioned earlier in the thread:
(apply merge-with (fn [& v] (into [] (flatten (conj [] v)))))
My interpretation was that vector was recommended as a merge function (which is why you would want to flatten):
(reduce
 (fn [m1 m2] (merge-with vector m1 m2))
 '({:foo "val1"} {:foo "val2"} {:foo "val3"} {:foo "val4"}))
;; => {:foo [[["val1" "val2"] "val3"] "val4"]}

Ben Lieberman17:01:37

For context, as @U05476190 says, I replaced the (working) flatten solution with (apply merge-with vector), but I did fail to specify that it only works on two maps before the nested vectors start to interfere.

Ben Lieberman17:01:18

I read somewhere a while back that flatten is not a good solution and should be avoided.

pithyless17:01:06

> I read somewhere a while back that flatten is not a good solution and should be avoided.
I am of this opinion, yes. It's like the joke about regexes: now you've probably got 2 problems ;)

didibus17:01:03

@U03QBKTVA0N What code gave you a stackoverflow?

Ben Lieberman17:01:22

Using apply merge-with vector on about 1500 maps did; I haven't had a chance to check out the reduce version though.

didibus17:01:45

When I try it, I think it's just the REPL printing that actually stackoverflows

pithyless17:01:02

Did you test with or without flatten postprocessing?

Ben Lieberman17:01:04

Each map in my data had 90 k-v pairs though, I think that might be the source but idk.

Ben Lieberman17:01:38

that's presumably creating a LOT of nesting

pithyless17:01:54

flatten uses tree-seq and I think that may quickly explode the number of thunks in memory

Ben Lieberman17:01:48

yeah but the flatten code wasn't blowing up, it was just slow

Ben Lieberman17:01:57

and it felt a little too "code smelly" to me

didibus17:01:09

user=> (set! *print-level* 10)
10
user=> (apply merge-with vector (repeatedly 10000000 #(hash-map :foo (rand-int 10))))
{:foo [[[[[[[[[# 8] 3] 1] 8] 7] 8] 6] 4] 7]}

didibus17:01:48

If the print-level is unbounded, and you try to print a very deeply nested data-structure, it will stackoverflow

🤯 2
didibus18:01:26

So I was wrong about my assumption that merge-with was the culprit 😄

pithyless18:01:56

I know we've gone off the deep end at this point, but if you're doing a lot of operations that sound columnar, you may be interested in looking at https://github.com/scicloj/tablecloth

👀 2
Ben Lieberman18:01:32

such an interesting deep end tho 😆

didibus18:01:24

As for your original problem, this solves it:

user=> (reduce
  #_=>  (fn [m1 m2] (merge-with #(if (vector? %1) (conj %1 %2) (vector %1 %2)) m1 m2))
  #_=>  '({:foo "val1"} {:foo "val2"} {:foo "val3"}))
{:foo ["val1" "val2" "val3"]}
It also works with apply:
user=> (apply
  #_=>  merge-with #(if (vector? %1) (conj %1 %2) (vector %1 %2))
  #_=>  '({:foo "val1"} {:foo "val2"} {:foo "val3"}))
{:foo ["val1" "val2" "val3"]}

Ben Lieberman18:01:07

yeah I ended up with almost exactly that yesterday

(apply merge-with 
              (fn [fst snd] (if (vector? fst) 
                              (conj fst snd) 
                              (vector fst snd))))

👏 2
pithyless18:01:15

When I benchmarked it on my machine, the apply merge-with was slightly faster than the reduce version (I suspect because the apply version internally calls reduce1 directly). But I did get it running faster with transducers: https://clojurians.slack.com/archives/C053AK3F9/p1675015405119949?thread_ts=1675009790.585169&cid=C053AK3F9

didibus18:01:02

Ya, the reduce is redundant here, because merge-with already reduces. I think the reduce would do multiple merge-with calls in a loop, whereas apply would have merge-with merge all of them in one loop. It's probably minimal either way.

didibus18:01:29

I guess reduce1 does probably get replaced by a real reduce at some point when loaded, though I could not find where

pithyless18:01:39

> I think the reduce would do multiple merge-with calls in a loop, whereas apply would have merge-with merge all of them in one loop.
This is how I explained it to myself. But it was just a REPL micro-benchmark. I wonder if a warmed-up server would JIT it away. But then the real performance always depends on your data anyway, so 🤷

didibus18:01:29

Only 150ms faster at N = 1 million isn't that big either. It depends what you're going for, but millisecond performance improvements at large N are normally not worth a lot of time investment.

pithyless19:01:03

> Only 150ms faster at N = 1 million isn't that big either.
Sure, but I don't think that's a fair assessment. Those numbers show a 2.7x speedup with the exact same algorithm (we're just conj'ing onto a collection). So I would argue it's not CPU-bound, and the difference in speed can only be explained by a lot more work being done allocating memory and GC. I'm sure if you profile it from a memory point of view it will show a different story.

pithyless19:01:43

Whether that difference is important to you, depends on how often you're doing it and what other potential resources you are starving. It may or may not be a concern for your specific application.

didibus23:01:04

It doesn't really matter if it's allocation-based; the impact is still 150ms to alloc/reclaim memory, and only for N = 1 million. But ya, it depends what you're doing; most people aren't doing high-performance work with Clojure anyway. Maybe if you were sensitive to GC pauses, but again, I don't like advice that caters to outlier use-cases. For most use-cases it doesn't seem to matter. I think if you want better performance, I'd rethink why I'm even having to do this merging of maps in the first place. A better data model and data structures might save a lot more time.

didibus23:01:32

I see a lot of people waste their time benchmarking apply vs. reduce, and it's not a good use of time in my opinion. If it's for fun and experimentation it's fine, but if you're implementing an app, pick whichever and move on. Then, when you have a full app, if you're not happy with the performance of certain things, profile and optimize as needed; maybe you do switch something from reduce to apply at that point, but only because you saw it in the flame graph 😛