Joshua Suskalo00:04:18

I've been working with some dynamic variables for a little project, and they don't seem to be working for me. My setup is like so: I have a namespace which creates a dynamic variable and defines a macro whose expansion includes binding it. In another namespace, I use said namespace (this behaves the same when I require it), define a function, and call it from within the body of a usage of the macro; while in that function the bound value isn't visible. However, whenever I use the CIDER debugger and debug either the function being called or the usage of the macro, it works as intended.

Joshua Suskalo00:04:54

Is there some gotcha with dynamic variables that would cause this? Or is there something else about my setup which is going wrong?


My guess would be you are binding the macro during macro expansion, but not in the expansion of the macro

Joshua Suskalo00:04:21

behavior is identical when the usage of the macro is replaced with its macroexpansion


And what is that?

Joshua Suskalo01:04:08

(restart-case (analyze-logs '("LOG: Hello, world!"
                              "LOG: This is a second log entry"
                              "LOG: "
                              "LOG: hey"
                              "ERROR: "))
  ::exit (fn [] (throw (ex-info "exit" {}))))

(binding [*restarts* (merge *restarts*
                            {::exit (fn [args__14292__auto__] ...)})]
  (try
    (analyze-logs '("LOG: Hello, world!"
                    "LOG: This is a second log entry"
                    "LOG: "
                    "LOG: hey"
                    "ERROR: "))
    (catch Exception e__14300__auto__
      (condp #(semaphore.proto/is-target? %2 %1) e__14300__auto__
        :semaphore.core/jump-target14766
        (apply (fn [] (throw (ex-info "exit" {})))
               (semaphore.proto/args e__14300__auto__))
        (throw e__14300__auto__)))))

Joshua Suskalo01:04:16

Sent in a reply so as not to put a lot of code inline


Maybe you are constructing something like a lazy seq which is being realized outside the scope of the binding

Joshua Suskalo01:04:35

No sequences are produced

Joshua Suskalo01:04:54

maybe that's it

Joshua Suskalo01:04:58

No sequences are produced in the macro itself, but the return value is lazy
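A minimal sketch (hypothetical names, not the actual semaphore code) of how a lazy return value can escape a `binding` scope:

```clojure
(def ^:dynamic *restarts* {})

(defn log-lines [lines]
  ;; map is lazy: nothing here runs until the seq is realized
  (map (fn [l] [*restarts* l]) lines))

(def result
  (binding [*restarts* {:exit :handler}]
    (log-lines ["a" "b"])))

;; Realized here, outside the binding, so *restarts* is back to its root {}:
(first result) ;; => [{} "a"]

;; Forcing realization inside the binding fixes it:
(def eager
  (binding [*restarts* {:exit :handler}]
    (doall (log-lines ["a" "b"]))))

(first eager) ;; => [{:exit :handler} "a"]
```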

Timofey Sitnikov13:04:00

Good morning, I have a function that I think produces a lazy sequence:

(defn all-files []
  (let [grammar-matcher (.getPathMatcher
                          (java.nio.file.FileSystems/getDefault)
                          "glob:*.txt")]
    (->> "./resources/"
         clojure.java.io/file
         file-seq
         (filter #(.isFile %))
         (filter #(.matches grammar-matcher (.getFileName (.toPath %))))
         (map #(.getAbsolutePath %)))))
It outputs a sequence of all the txt files in the tree. When I start consuming the sequence with a map that reads each file and prints the file name, I end up getting the following error:
Execution error (NullPointerException) at (REPL:1).
I am assuming this is because the lazy sequence is not fully realized, and when I consume it, it runs out of completed items and then hits the error. Is that a good assumption?

Alex Miller (Clojure team)13:04:51

maybe. might want to (clojure.repl/pst *e) when you get it to see the stack trace

Timofey Sitnikov15:04:37

Never did the stack tracing, will try to figure it out, thank you.

Alex Miller (Clojure team)13:04:50

(you'll want to be careful reading resources as files too - this won't work if you package this code+resources in a jar)


I want to do a couple of HTTP/REST API calls. The API endpoint has rate limiting. I am looking for a good way to structure the workflow. So say I start with 1 API call, then update an atom based on the result, then do more calls. Sometimes I have to iterate through paged results, etc. Is there some kind of example with core.async available, where I could see how to structure a workflow like that?

jjttjj16:04:14

the source here is pretty small and worth checking out even if you don't want to use the library

Alex Miller (Clojure team)16:04:48

we have some work in progress towards adding something like this to core.async, @U050ECB92 can probably drop a gist

👀 6

I am using throttler. But I find that the orchestration code I have is really bad. In JavaScript there are a ton of libraries that use promises to structure workflows, and I would like to write clj + cljs code with core.async that has similar functionality.


The actual code that does the work is easy to write in clj + cljs.


But the coordination to structure a workflow is pretty difficult


Iteration is feedback


which is to say, if you have something like clojure.core/iterate, the way it works is it produces a seq by calling a function to get the first element, and then feeding that first element back into the function to get the rest


iteration is what you want for paginated apis, because each page is a page of results + some "next" value that you use to get the next page
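Clojure 1.11 later shipped exactly this pattern as `clojure.core/iteration`. A sketch with a fake `fetch-page` standing in for the real HTTP call:

```clojure
;; Fake paginated API: each page holds two items plus a :next token;
;; the last page's :next is nil, which stops the iteration.
(defn fetch-page [token]
  (let [k (or token 0)]
    {:items (range k (+ k 2))
     :next  (when (< k 4) (+ k 2))}))

(def pages
  (iteration fetch-page
             :initk nil   ;; token for the first call
             :kf :next    ;; how to get the next token from a page
             :vf :items)) ;; how to get the values out of a page

;; Flatten all pages into one collection of items:
(into [] cat pages) ;; => [0 1 2 3 4 5]
```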


with core.async there are sort of two ways to build an iterative process


1. take a function and iterate it (lifting a function into a process)
2. take a process and iterate it


iterating a function to turn it into a process is pretty straightforward; it looks just like using iterate to build a lazy sequence, but instead of constructing a lazy seq you are sending to a channel
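A sketch of option 1, lifting a function into a process (`iterate-process` is a hypothetical helper name):

```clojure
(require '[clojure.core.async :as a])

;; Like clojure.core/iterate, but each value goes to a channel
;; instead of into a lazy seq.
(defn iterate-process [f init]
  (let [out (a/chan)]
    (a/go-loop [v init]
      (when (a/>! out v)   ;; >! returns false once the consumer closes out
        (recur (f v))))
    out))

(def ch (iterate-process inc 0))
(a/<!! ch) ;; => 0
(a/<!! ch) ;; => 1
(a/close! ch)
```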


iterating a process involves adding a backward edge (like another channel), that takes data from the output and feeds it back into the input
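And a sketch of option 2, iterating a process by wiring the output back to the input (names here are hypothetical; `::start` is a stand-in for the first token):

```clojure
(require '[clojure.core.async :as a])

;; step! takes a token and returns a channel that yields
;; {:items [...] :next token-or-nil}.
(defn paged-process [step!]
  (let [in  (a/chan 1)
        out (a/chan)]
    (a/go-loop []
      (if-some [token (a/<! in)]
        (let [page (a/<! (step! token))]
          (a/>! out page)
          (if-some [nxt (:next page)]
            (a/>! in nxt)   ;; backward edge: feed the output back into the input
            (a/close! in))
          (recur))
        (a/close! out)))
    (a/>!! in ::start)
    out))

;; Fake step! for demonstration: three pages, one item each.
(defn fake-step! [token]
  (a/go
    (let [k (if (= token ::start) 0 token)]
      {:items [k] :next (when (< k 2) (inc k))})))

(map :items (a/<!! (a/into [] (paged-process fake-step!))))
;; => ([0] [1] [2])
```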


Thanks @U0NCTKEV8, this is what I need. Do you know where I might find some kind of example of these two approaches?


I do not. There is a seq version of unfold (or iterate); you can write something similar that sends output to a channel instead of consing up a seq


@U0NCTKEV8 thanks .. very helpful


step! is a function that, given a token, hits the http endpoint to fetch data, returning a channel with that data. When token is nil, that signifies the initial call to step!. The :vsf argument is a function taking the result of step! (AKA fetched api data) and extracting a collection of the juicy bits. :kf would take the fetched page data and produce the token that, when given to step!, grabs the next page


in your case @UCSJVFV35, you would put all the retries inside step!


the gist above is a way to consume iterated api patterns generically. it is not a helper for retries


the core operation, the step! argument, is to fetch one page (no matter how many retries it takes)
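One way a step! might fold retries inside itself; `http-get-page` is a hypothetical fetch that returns a channel, and the retry count and backoff are illustrative:

```clojure
(require '[clojure.core.async :as a])

(declare http-get-page) ;; hypothetical: token -> channel yielding a response map

(defn step! [token]
  (a/go
    (loop [attempt 1]
      (let [res (a/<! (http-get-page token))]
        (if (or (:ok res) (= attempt 3))
          res                                     ;; success, or give up
          (do (a/<! (a/timeout (* 100 attempt)))  ;; simple linear backoff
              (recur (inc attempt))))))))
```

The caller never sees the retries; from the outside, step! is still "fetch one page".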


I admit that the docstring is a bit much


:kf = key function :vsf = values (plural) function


Thank you very much


I know anonymous functions with multiple arguments aren’t super popular here, but I hope someone enjoys adding this to their utils namespace:

(defn map-curry [f coll]
  (map #(apply f %) coll))

(map-curry #(hash-map %2 %1) {:a 1 :b 2})
;; => ({1 :a} {2 :b})
;; Compare to:
;; (map (fn [[k v]] (hash-map v k)) {:a 1 :b 2})

;; Also works on tuples!
(map-curry #(+ % 1 (/ %2 %3)) [[1 2 3] [4 5 6]])
;; => (8/3 35/6)
EDIT: I got it backwards, it’s actually uncurrying so a better name would be map-uncurry


where does the curry come in?

☝️ 3
🍛 3

From the fact that it turns a tuple into individual arguments. For example, here's the type of curry in Haskell:

curry :: ((a, b) -> c) -> a -> b -> c


Though now that I look at it it’s actually uncurrying, not currying 😅


uncurry :: (a -> b -> c) -> (a, b) -> c


Seeing the construct #(apply f %) reminds me of a question. Rather than the anonymous fn syntax, I prefer the higher-order-fn approach of (partial apply f). However, that used to come with a performance penalty. I thought there was also something about getting `partial` close to anonymous function performance in a release at one point? I'm not seeing it though, so maybe I was wrong. I'm wondering if these two constructs are going to be about the same in performance, or if anonymous functions are still faster. Does anyone know, please?


in terms of performance the problem is apply not partial ; )


The question is in terms of: (partial some-fn) vs. #(some-fn %)


The only reason I mentioned apply was because that was the construct in the above message from Max


I’m influenced by the fact that Rich has said he considers partial to be less idiomatic than an anonymous function.


I hadn’t heard that. Thank you


(e/qb 1e5
  ((partial tmpfn) nil)
  (#(tmpfn nil)))
;; => [7.33 4.55]
result in ms


Do you mean?

(e/qb 1e5
  ((partial tmpfn) nil)
  (#(tmpfn %) nil))


An almost verbatim reproduction of the test from that issue:

Clojure 1.10.2
user=> (require '[criterium.core :refer [bench]])

user=> (let [f (partial + 1 1)] (bench (f 1 1)))
Evaluation count : 575498700 in 60 samples of 9591645 calls.
             Execution time mean : 98.891101 ns
    Execution time std-deviation : 1.024939 ns
   Execution time lower quantile : 98.103690 ns ( 2.5%)
   Execution time upper quantile : 100.146501 ns (97.5%)
                   Overhead used : 5.615028 ns

Found 4 outliers in 60 samples (6.6667 %)
	low-severe	 3 (5.0000 %)
	low-mild	 1 (1.6667 %)
 Variance from outliers : 1.6389 % Variance is slightly inflated by outliers

user=> (let [f (fn [a b] (+ 1 1 a b))] (bench (f 1 1)))
Evaluation count : 6352825620 in 60 samples of 105880427 calls.
             Execution time mean : 3.946144 ns
    Execution time std-deviation : 0.466162 ns
   Execution time lower quantile : 3.811868 ns ( 2.5%)
   Execution time upper quantile : 3.993066 ns (97.5%)
                   Overhead used : 5.615028 ns

Found 1 outliers in 60 samples (1.6667 %)
	low-severe	 1 (1.6667 %)
 Variance from outliers : 77.1883 % Variance is severely inflated by outliers


@U051N6TTC A silly mistake, but the result is basically identical


In my case, the result is quite different. :) 99ns vs 4ns.


This is why I don’t trust myself benchmarking things like this 🙂


Thank you for the comparisons


I imagine, the results may also vary greatly between JVMs. But I'm too lazy to actually check it. Especially since I almost never use partial myself.


@U2FRKM4TW probably because you're declaring the function in a let, and #( wastes time declaring the function on the fly with each iteration of the bench


That lambda ends up being compiled as a class, once. It's not recompiled on each execution.


partial is useful when you have a multi-arity function that you want to curry.
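For example, partial fixes the leading arguments while keeping the remaining arity open, which the #( ... ) syntax can't do without %& gymnastics:

```clojure
(def add5 (partial + 5))

(add5)      ;; => 5
(add5 1)    ;; => 6
(add5 1 2)  ;; => 8
```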


On Cljs the definitions of stuff like partial and comp also generate a lot of JS. But OTOH if some dependency is using them you will get that JS anyway so why not use them directly too...