#clojure
2021-04-01
Joshua Suskalo00:04:18

I've been working with some dynamic variables for a little project of mine, and it seems like they aren't working for me. My setup is like so: I have a namespace which creates a dynamic variable and defines a macro that binds it. In another namespace, I use said namespace (this works the same when I require it too), define a function, and call it from within the body of a usage of the macro, and while in that function the bound value isn't visible. However, whenever I use the cider debugger to debug either the function being called or the usage of the macro, it works as intended.

Joshua Suskalo00:04:54

Is there some gotcha with dynamic variables that would cause this? Or is there something else about my setup which is going wrong?

hiredman00:04:17

My guess would be you are binding the var during macro expansion, but not in the expansion of the macro
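A minimal sketch of that distinction (the var *x* and the two macros are made-up names):

(def ^:dynamic *x* :default)

;; Binds the var only while the macro expands; by the time the
;; expansion runs, the binding has already been popped:
(defmacro with-x-wrong [& body]
  (binding [*x* :bound]
    `(do ~@body)))

;; Emits the binding form into the expansion, so it is in effect
;; when the body actually runs:
(defmacro with-x-right [& body]
  `(binding [*x* :bound]
     ~@body))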

Joshua Suskalo00:04:21

behavior is identical when the usage of the macro is replaced with its macroexpansion

hiredman01:04:50

And what is that?

Joshua Suskalo01:04:08

(restart-case (analyze-logs '("LOG: Hello, world!"
                              "LOG: This is a second log entry"
                              "LOG: "
                              "LOG: hey"
                              "blah"
                              "ERROR: "))
  ::exit (fn [] (throw (ex-info "exit" {}))))

(binding [*restarts* (merge *restarts*
                            #:user{:exit
                                   (fn [args__14292__auto__]
                                     (make-jump
                                      :semaphore.core/jump-target14766
                                      args__14292__auto__))})]
  (try
    (analyze-logs
     '("LOG: Hello, world!"
       "LOG: This is a second log entry"
       "LOG: "
       "LOG: hey"
       "blah"
       "ERROR: "))
    (catch
        semaphore.signal.Signal
        e__14300__auto__
      (apply
       (condp #(semaphore.proto/is-target? %2 %1) e__14300__auto__
         :semaphore.core/jump-target14766 (fn 
                                            []
                                            (throw
                                             (ex-info "exit" {})))
         (throw e__14300__auto__))
       (semaphore.proto/args e__14300__auto__)))))

Joshua Suskalo01:04:16

Sent in a reply so as not to put a lot of code inline

hiredman01:04:26

Maybe you are constructing something like a lazy seq which is being realized outside the scope of the binding

Joshua Suskalo01:04:35

No sequences are produced

Joshua Suskalo01:04:54

maybe that's it

Joshua Suskalo01:04:58

No sequences are in the macro itself, but the return value does produce a lazy value
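That is the classic gotcha: a lazy value created inside a binding but only realized after the binding has been popped sees the root value instead. A minimal illustration (the var *v* is made up):

(def ^:dynamic *v* :outer)

;; Realized lazily, outside the binding scope:
(def xs (binding [*v* :inner]
          (map (fn [_] *v*) (range 3))))
(first xs)  ;; => :outer

;; Forced inside the binding, so the values are captured in time:
(def ys (binding [*v* :inner]
          (doall (map (fn [_] *v*) (range 3)))))
(first ys)  ;; => :inner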

Timofey Sitnikov13:04:00

Good morning, I have a function that I think produces a lazy sequence:

(defn all-files []
  (let [grammar-matcher (.getPathMatcher
                          (java.nio.file.FileSystems/getDefault)
                          "glob:*.{txt}")]
    (->> "./resources/"
         clojure.java.io/file   ; file-seq needs a java.io.File, not a string
         file-seq
         (filter #(.isFile %))
         (filter #(.matches grammar-matcher (.getFileName (.toPath %))))
         (map #(.getAbsolutePath %)))))
It outputs a sequence of all txt files in the tree. When I map over that sequence with a function that reads each file and prints the file name, I end up getting the following error:
"/home/my_home/clojure/./resources/data/4/file1.txt"
"/home/my_home/clojure/./resources/data/4/file2.txt"
"/home/my_home/clojure/./resources/data/4/file3.txt"
"/home/my_home/clojure/./resources/data/4/file4.txt"
Execution error (NullPointerException) at (REPL:1).
null
I am assuming this is because the lazy sequence isn't fully realized, and when I consume it, it runs out of completed items and then hits the error. Is that a good assumption?

Alex Miller (Clojure team)13:04:51

maybe. might want to (clojure.repl/pst *e) when you get it to see the stack trace
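For example, right after the error appears at the REPL:

user=> (clojure.repl/pst *e)   ; *e holds the most recent exception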

Timofey Sitnikov15:04:37

Never did the stack tracing, will try to figure it out, thank you.

Alex Miller (Clojure team)13:04:50

(you'll want to be careful reading resources as files too - this won't work if you package this code+resources in a jar)
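A sketch of the jar-safe way to read one of those files, going through the classpath instead of a filesystem path (the resource name is taken from the output above); note that enumerating files with file-seq only works on a real filesystem:

(require '[clojure.java.io :as io])

;; io/resource returns a URL (or nil) for anything on the classpath,
;; and works the same from source checkouts and from jars:
(some-> (io/resource "data/4/file1.txt")
        slurp)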

awb9916:04:31

I want to do a couple of http/rest api calls. The api endpoint has rate-limiting. I am looking for a good way to structure the workflow. So say I start with 1 api call, then update an atom based on the result, then do more calls. Sometimes I have to iterate through paged results, etc. Is there some kind of example with core.async available, where I could see how to structure a workflow like that?

jjttjj16:04:14

https://github.com/brunoV/throttler the source here is pretty small and worth checking out even if you don't want to use the library

Alex Miller (Clojure team)16:04:48

we have some work in progress towards adding something like this to core.async, @U050ECB92 can probably drop a gist

👀 6
awb9917:04:42

I am using throttler. But I find that the orchestration code I have is really bad. In javascript there are a ton of libraries that use promises to structure workflows. And I would like to write clj + cljs code with core.async that has similar functionality.

awb9917:04:00

The actual code that does the work is easy to write in clj + cljs.

awb9917:04:28

But the coordination to structure a workflow is pretty difficult

hiredman19:04:39

Iteration is feedback

hiredman20:04:46

which is to say, if you have something like clojure.core/iterate, the way it works is it produces a seq by calling a function to get the first element, and then feeding that first element back into the function to get the rest
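For reference, the seq-building version:

;; (iterate f x) lazily produces (x (f x) (f (f x)) ...)
(take 5 (iterate inc 0))        ;; => (0 1 2 3 4)
(take 4 (iterate #(* 2 %) 1))   ;; => (1 2 4 8)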

hiredman20:04:29

iteration is what you want for paginated apis, because each page is a page of results + some "next" value that you use to get the next page
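A minimal sketch of that shape, assuming a hypothetical fetch-page! that returns a map like {:results [...] :next token-or-nil}:

(defn all-results
  "Lazily walks a paginated API; token is nil for the first page."
  [fetch-page! token]
  (lazy-seq
    (let [{:keys [results next]} (fetch-page! token)]
      (concat results
              (when next
                (all-results fetch-page! next))))))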

hiredman20:04:44

with core.async there are sort of two ways to build an iterative process

hiredman20:04:55

1. take a function and iterate it (lifting a function into a process)
2. take a process and iterate it

hiredman20:04:41

iterating a function to turn it into a process is pretty straightforward; it looks just like using iterate to build a lazy sequence, but instead of constructing a lazy seq you are sending to a channel
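A rough sketch of that first approach, assuming core.async (the names are illustrative):

(require '[clojure.core.async :as a])

(defn iterate-onto-chan!
  "Like iterate, but puts each successive value onto out instead of
  building a lazy seq; stops once out is closed."
  [f x out]
  (a/go-loop [x x]
    (when (a/>! out x)   ; >! returns false once the channel is closed
      (recur (f x)))))

;; usage: (def out (a/chan 10)) (iterate-onto-chan! inc 0 out)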

hiredman20:04:15

iterating a process involves adding a backward edge (like another channel), that takes data from the output and feeds it back into the input
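And a rough sketch of the second approach, with a backward edge feeding results back in until some condition is met (again, illustrative names only):

(require '[clojure.core.async :as a])

(let [in  (a/chan)
      out (a/chan)]
  ;; the process: one step of work per input value
  (a/go-loop []
    (when-some [v (a/<! in)]
      (a/>! out (inc v))
      (recur)))
  ;; the backward edge: feed each result back into the input until done
  (a/go-loop []
    (when-some [v (a/<! out)]
      (if (< v 5)
        (do (a/>! in v) (recur))
        (println "finished with" v))))
  (a/>!! in 0))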

awb9921:04:09

Thanks @U0NCTKEV8 this is what I need. Do you know where I might get some kind of example for these two approaches?

hiredman21:04:53

I do not. https://github.com/clj-commons/useful/blob/master/src/flatland/useful/seq.clj#L129-L148 is a seq version of unfold (or iterate); you can write something similar that sends output to a channel instead of cons'ing up a seq

awb9901:04:46

@U0NCTKEV8 thanks .. very helpful

ghadi02:04:24

step! is a function that, given a token, hits the http endpoint to fetch data, returning a channel with that data. When token is nil, that signifies the initial call to step!. :vsf is a function taking the result of step! (AKA the fetched api data) and extracting a collection of the juicy bits. :kf would take the fetched page data and produce the token that, when given to step!, grabs the next page.
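The gist itself isn't reproduced here, but a rough sketch of a consumer shaped like that description might look as follows (drain-pages! is a made-up name, assuming core.async):

(require '[clojure.core.async :as a])

(defn drain-pages!
  "Hypothetical helper: walks every page of an API. step! takes a token
  (nil for the first call) and returns a channel yielding one page; vsf
  extracts the values from a page; kf extracts the next token, or nil
  when there are no more pages. Returns a channel yielding one vector
  of all the values."
  [step! {:keys [vsf kf]}]
  (a/go-loop [token nil, acc []]
    (let [page   (a/<! (step! token))
          acc    (into acc (vsf page))
          token' (kf page)]
      (if token'
        (recur token' acc)
        acc))))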

ghadi02:04:47

in your case @UCSJVFV35, you would put all the retries inside step!

ghadi02:04:39

the gist above is a way to consume iterated api patterns generically. it is not a helper for retries

ghadi02:04:27

the core operation, the step! argument, is to fetch one page (no matter how many retries it takes)

ghadi02:04:30

I admit that the docstring is a bit much

ghadi02:04:57

:kf = key function :vsf = values (plural) function

awb9902:04:35

Thank you very much

Max20:04:08

I know anonymous functions with multiple arguments aren’t super popular here, but I hope someone enjoys adding this to their utils namespace:

(defn map-curry [f coll]
  (map #(apply f %) coll))

(map-curry #(hash-map %2 %1) {:a 1 :b 2})
;; => ({1 :a} {2 :b})
;; Compare to:
;; (map (fn [[k v]] (hash-map v k)) {:a 1 :b 2})

;; Also works on tuples!
(map-curry #(+ % 1 (/ %2 %3)) [[1 2 3] [4 5 6]])
;; => (8/3 35/6)
EDIT: I got it backwards, it’s actually uncurrying so a better name would be map-uncurry

dpsutton20:04:18

where does the curry come in?

☝️ 3
🍛 3
Max02:04:57

From the fact that it turns a tuple into individual arguments. For example, here's the type of curry in haskell:

curry :: ((a, b) -> c) -> a -> b -> c

Max02:04:30

Though now that I look at it it’s actually uncurrying, not currying 😅

Max03:04:41

uncurry :: (a -> b -> c) -> (a, b) -> c

quoll21:04:52

Seeing the construct #(apply f %) reminds me of a question. Rather than the anonymous fn syntax, I prefer the higher-order-fn approach of (partial apply f). However, that used to come with a performance penalty. I see https://clojure.atlassian.net/browse/CLJ-1430 about `partial`, but I thought there was also something about getting it close to anonymous function performance in a release at one point? I'm not seeing it though, so maybe I was wrong. I'm wondering if these two constructs are going to be about the same in performance, or if anonymous functions are still faster. Does anyone know please?

ribelo21:04:08

in terms of performance the problem is apply not partial ; )

quoll21:04:10

The question is in terms of: (partial some-fn) vs. #(some-fn %)

quoll21:04:51

The only reason I mentioned apply was because that was the construct in the above message from Max

seancorfield21:04:52

I’m influenced by the fact that Rich has said he considers partial to be less idiomatic than an anonymous function.

quoll21:04:22

I hadn’t heard that. Thank you

ribelo22:04:33

(e/qb 1e5
  ((partial tmpfn) nil)
  (#(tmpfn nil)))
;; => [7.33 4.55]
result in ms

quoll22:04:08

Do you mean?

(e/qb 1e5
  ((partial tmpfn) nil)
  (#(tmpfn %) nil))

p-himik22:04:20

An almost verbatim reproduction of the test from that issue:

Clojure 1.10.2
user=> (require '[criterium.core :refer [bench]])
nil


user=> (let [f (partial + 1 1)] (bench (f 1 1)))
Evaluation count : 575498700 in 60 samples of 9591645 calls.
             Execution time mean : 98.891101 ns
    Execution time std-deviation : 1.024939 ns
   Execution time lower quantile : 98.103690 ns ( 2.5%)
   Execution time upper quantile : 100.146501 ns (97.5%)
                   Overhead used : 5.615028 ns

Found 4 outliers in 60 samples (6.6667 %)
	low-severe	 3 (5.0000 %)
	low-mild	 1 (1.6667 %)
 Variance from outliers : 1.6389 % Variance is slightly inflated by outliers
nil


user=> (let [f (fn [a b] (+ 1 1 a b))] (bench (f 1 1)))
Evaluation count : 6352825620 in 60 samples of 105880427 calls.
             Execution time mean : 3.946144 ns
    Execution time std-deviation : 0.466162 ns
   Execution time lower quantile : 3.811868 ns ( 2.5%)
   Execution time upper quantile : 3.993066 ns (97.5%)
                   Overhead used : 5.615028 ns

Found 1 outliers in 60 samples (1.6667 %)
	low-severe	 1 (1.6667 %)
 Variance from outliers : 77.1883 % Variance is severely inflated by outliers
nil

ribelo22:04:13

@U051N6TTC A silly mistake, but the result is basically identical

p-himik22:04:38

In my case, the result is quite different. :) 99ns vs 4ns.

quoll22:04:24

This is why I don’t trust myself benchmarking things like this 🙂

quoll22:04:34

Thank you for the comparisons

p-himik22:04:42

I imagine the results may also vary greatly between JVMs. But I'm too lazy to actually check. Especially since I almost never use partial myself.

ribelo22:04:43

@U2FRKM4TW probably because you're declaring the function in let, and #( wastes time declaring the function on the fly with each iteration of the bench

p-himik22:04:11

That lambda ends up being compiled as a class, once. It's not recompiled on each execution.

p-himik22:04:47

partial is useful when you have a multi-arity function that you want to curry.
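For example, partial leaves the remaining arities open, which a #( ) literal can't do without spelling each arity out:

;; + is variadic, and (partial + 1) stays variadic:
((partial + 1) 2)        ;; => 3
((partial + 1) 2 3 4)    ;; => 10
;; whereas #(+ 1 %) accepts exactly one argument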

nilern07:04:29

On Cljs the definitions of stuff like partial and comp also generate a lot of JS. But OTOH if some dependency is using them you will get that JS anyway so why not use them directly too...