This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-04-01
Channels
- # announcements (19)
- # asami (4)
- # babashka (34)
- # beginners (137)
- # calva (22)
- # cider (4)
- # clj-kondo (25)
- # cljs-dev (4)
- # clojure (67)
- # clojure-australia (1)
- # clojure-berlin (1)
- # clojure-europe (35)
- # clojure-germany (3)
- # clojure-nl (5)
- # clojure-serbia (3)
- # clojure-uk (8)
- # clojuredesign-podcast (2)
- # clojurescript (11)
- # conjure (56)
- # data-oriented-programming (1)
- # datascript (1)
- # datomic (6)
- # deps-new (11)
- # eastwood (1)
- # fulcro (11)
- # honeysql (48)
- # inf-clojure (1)
- # jobs (1)
- # joker (6)
- # lsp (26)
- # malli (2)
- # meander (3)
- # off-topic (48)
- # pathom (4)
- # polylith (4)
- # re-frame (19)
- # releases (2)
- # remote-jobs (1)
- # rewrite-clj (127)
- # shadow-cljs (6)
- # spacemacs (3)
- # tools-deps (43)
- # xtdb (16)
I've been working with some dynamic variables for a little project, and it seems like they aren't working for me. My setup is like so: I have a namespace which creates a dynamic variable and defines a macro which includes a binding of it. In another namespace, I use that namespace (this works the same when I require it), define a function, and call it from within the body of a usage of the macro, and while inside that function the binding value isn't visible. However, whenever I use the cider debugger and debug either the function being called or the usage of the macro, it works as intended.
Is there some gotcha with dynamic variables that would cause this? Or is there something else about my setup which is going wrong?
My guess would be you are binding the var during macro expansion, but not in the expansion of the macro
behavior is identical when the usage of the macro is replaced with its macroexpansion
(restart-case (analyze-logs '("LOG: Hello, world!"
"LOG: This is a second log entry"
"LOG: "
"LOG: hey"
"blah"
"ERROR: "))
::exit (fn [] (throw (ex-info "exit" {}))))
(binding [*restarts* (merge *restarts*
#:user{:exit
(fn [args__14292__auto__]
(make-jump
:semaphore.core/jump-target14766
args__14292__auto__))})]
(try
(analyze-logs
'("LOG: Hello, world!"
"LOG: This is a second log entry"
"LOG: "
"LOG: hey"
"blah"
"ERROR: "))
(catch
semaphore.signal.Signal
e__14300__auto__
(apply
(condp #(semaphore.proto/is-target? %2 %1) e__14300__auto__
:semaphore.core/jump-target14766 (fn
[]
(throw
(ex-info "exit" {})))
(throw e__14300__auto__))
(semaphore.proto/args e__14300__auto__)))))
Sent in a reply so as not to put a lot of code inline
Maybe you are constructing something like a lazy seq which is being realized outside the scope of the binding
No sequences are produced
actually wait
maybe that's it
That was it
No sequences are in the macro itself, but the return value produces a lazy value
Thanks!
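A minimal sketch of the gotcha above, assuming a dynamic var named *restarts* (the names here are illustrative, not the actual project code):
(def ^:dynamic *restarts* {})
;; the map below is lazy; nothing is realized inside the binding
(defn lazy-lookup [ks]
  (map (fn [k] (get *restarts* k :missing)) ks))
(binding [*restarts* {:exit prn}]
  (lazy-lookup [:exit]))
;; => (:missing)  the seq is realized by the REPL printer, after the binding has been popped
(binding [*restarts* {:exit prn}]
  (doall (lazy-lookup [:exit])))
;; => (#object[clojure.core$prn ...])  forcing realization inside the binding fixes it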
Good morning, I have a function that I think produces a lazy sequence:
(defn all-files []
(let [grammar-matcher (.getPathMatcher
(java.nio.file.FileSystems/getDefault)
"glob:*.{txt}")]
(->> "./resources/"
file-seq
(filter #(.isFile %))
(filter #(.matches grammar-matcher (.getFileName (.toPath %))))
(map #(.getAbsolutePath %)))))
It outputs a sequence of all txt files in the tree. When I start consuming the sequence with a map that reads each file and prints out the file name, I end up getting the following error:
"/home/my_home/clojure/./resources/data/4/file1.txt"
"/home/my_home/clojure/./resources/data/4/file2.txt"
"/home/my_home/clojure/./resources/data/4/file3.txt"
"/home/my_home/clojure/./resources/data/4/file4.txt"
Execution error (NullPointerException) at (REPL:1).
null
I am assuming that this is because the lazy sequence is not done and when I consume it, it runs out of completed items and then hits the error. Is that a good assumption?
maybe. might want to (clojure.repl/pst *e) when you get it to see the stack trace
Never did the stack tracing, will try to figure it out, thank you.
(you'll want to be careful reading resources as files too - this won't work if you package this code+resources in a jar)
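On that last point, a minimal sketch of going through the classpath with clojure.java.io/resource instead of the filesystem; "data/4/file1.txt" is just a hypothetical path relative to the classpath root:
(require '[clojure.java.io :as io])
;; works whether the resource is a file on disk or an entry inside a jar
(when-let [r (io/resource "data/4/file1.txt")]
  (slurp r))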
I want to do a couple of http/rest api calls. The api endpoint has rate-limiting. I am looking for a good way to structure the workflow. So say I start with 1 api call, then update an atom based on the result, then do more calls. Sometimes I have to iterate through paged results, etc. Is there some kind of example with core.async available, where I could see how to structure a workflow like that?
https://github.com/brunoV/throttler the source here is pretty small and worth checking out even if you don't want to use the library
we have some work in progress towards adding something like this to core.async, @U050ECB92 can probably drop a gist
Thanks @U064UGEUQ and @U064X3EF3
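For reference, a small sketch of how the throttler library is typically used, going from memory of its README (check the repo for the exact API):
(require '[throttler.core :refer [throttle-fn]])
;; wrap a function so it is called at most 100 times per second
(def slow+ (throttle-fn + 100 :second))
(slow+ 1 2) ;; => 3, but calls beyond the rate limit block until allowed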
I am using throttler. But I find that the orchestration code I have is really bad. In javascript there are a ton of libraries that use promises to structure workflows. And I would like to write clj + cljs code with core.async that has similar functionality.
which is to say, if you have something like clojure.core/iterate, the way it works is it produces a seq by calling a function to get the first element, and then feeding that first element back into the function to get the rest
iteration is what you want for paginated apis, because each page is a page of results + some "next" value that you use to get the next page
1. take a function and iterate it (lifting a function into a process)
2. take a process and iterate it
iterating a function to turn it into a process is pretty straightforward; it looks just like using iterate to build a lazy sequence, but instead of constructing a lazy seq you are sending to a channel
iterating a process involves adding a backward edge (like another channel), that takes data from the output and feeds it back into the input
Thanks @U0NCTKEV8 this is what I need. Do you know where I might find some kind of example for these two approaches?
I do not. https://github.com/clj-commons/useful/blob/master/src/flatland/useful/seq.clj#L129-L148 is a seq version of unfold (or iterate); you can write something similar that sends output to a channel instead of consing up a seq
@U0NCTKEV8 thanks .. very helpful
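A sketch of that channel-sending version, assuming core.async; fetch-pages! and the [page next-token] shape are made up here for illustration:
(require '[clojure.core.async :as a])
;; step is assumed to return a channel yielding [page next-token],
;; with next-token nil when there are no more pages
(defn fetch-pages! [step out]
  (a/go-loop [token nil]
    (let [[page next-token] (a/<! (step token))]
      (when page
        (a/>! out page))
      (if next-token
        (recur next-token) ;; feeds the token back in; a separate channel could serve as the backward edge instead
        (a/close! out)))))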
step! is a function that, given a token, hits the http endpoint to fetch data, returning a channel with that data. when token is nil, that signifies the initial call to step!
:vsf - the vsf argument is a function taking the result of step! (AKA fetched api data) and extracting a collection of the juicy bits
:kf - would take the fetched page data and produce the token that, when given to step!, grabs the next page
in your case @UCSJVFV35, you would put all the retries inside step!
the gist above is a way to consume iterated api patterns generically. it is not a helper for retries
the core operation, the step! argument, is to fetch one page (no matter how many retries it takes)
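To make that shape concrete, a rough seq-based sketch of such a generic page iterator (not the actual gist, and step! is treated as synchronous here rather than returning a channel):
;; step! - given a token (nil for the first call), fetches one page of api data
;; vsf   - extracts the collection of values from a fetched page
;; kf    - extracts the token for the next page, or nil when done
(defn paged-seq [step! vsf kf]
  (letfn [(pages [token]
            (lazy-seq
              (let [page (step! token)
                    next-token (kf page)]
                (concat (vsf page)
                        (when next-token
                          (pages next-token))))))]
    (pages nil)))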
I know anonymous functions with multiple arguments aren’t super popular here, but I hope someone enjoys adding this to their utils namespace:
(defn map-curry [f coll]
(map #(apply f %) coll))
(map-curry #(hash-map %2 %1) {:a 1 :b 2})
;; => ({1 :a} {2 :b})
;; Compare to:
;; (map (fn [[k v]] (hash-map v k)) {:a 1 :b 2})
;; Also works on tuples!
(map-curry #(+ % 1 (/ %2 %3)) [[1 2 3] [4 5 6]])
;; => (8/3 35/6)
EDIT: I got it backwards, it’s actually uncurrying so a better name would be map-uncurry
From the fact that it turns a tuple into individual arguments. For example, here's the type of curry in Haskell:
curry :: ((a, b) -> c) -> a -> b -> c
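For comparison, a quick sketch of curry and uncurry in Clojure terms for the two-argument case; map-curry above is essentially map of the uncurried function, generalized with apply to tuples of any length:
;; curry :: ((a, b) -> c) -> a -> b -> c
(defn curry [f]
  (fn [a b] (f [a b])))
;; uncurry :: (a -> b -> c) -> ((a, b) -> c)
(defn uncurry [f]
  (fn [[a b]] (f a b)))
((uncurry hash-map) [:a 1]) ;; => {:a 1}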
Seeing the construct #(apply f %) reminds me of a question. Rather than the anonymous fn syntax, I prefer the higher-order-fn approach of (partial apply f). However, that used to come with a performance penalty.
I see https://clojure.atlassian.net/browse/CLJ-1430 about partial, but I thought there was also something about getting it close to anonymous function performance in a release at one point? I'm not seeing it though, so maybe I was wrong. I'm wondering if these two constructs are going to be about the same in performance, or if anonymous functions are still faster. Does anyone know please?
The only reason I mentioned apply was because that was the construct in the above message from Max
I'm influenced by the fact that Rich has said he considers partial to be less idiomatic than an anonymous function.
An almost verbatim reproduction of the test from that issue:
Clojure 1.10.2
user=> (require '[criterium.core :refer [bench]])
nil
user=> (let [f (partial + 1 1)] (bench (f 1 1)))
Evaluation count : 575498700 in 60 samples of 9591645 calls.
Execution time mean : 98.891101 ns
Execution time std-deviation : 1.024939 ns
Execution time lower quantile : 98.103690 ns ( 2.5%)
Execution time upper quantile : 100.146501 ns (97.5%)
Overhead used : 5.615028 ns
Found 4 outliers in 60 samples (6.6667 %)
low-severe 3 (5.0000 %)
low-mild 1 (1.6667 %)
Variance from outliers : 1.6389 % Variance is slightly inflated by outliers
nil
user=> (let [f (fn [a b] (+ 1 1 a b))] (bench (f 1 1)))
Evaluation count : 6352825620 in 60 samples of 105880427 calls.
Execution time mean : 3.946144 ns
Execution time std-deviation : 0.466162 ns
Execution time lower quantile : 3.811868 ns ( 2.5%)
Execution time upper quantile : 3.993066 ns (97.5%)
Overhead used : 5.615028 ns
Found 1 outliers in 60 samples (1.6667 %)
low-severe 1 (1.6667 %)
Variance from outliers : 77.1883 % Variance is severely inflated by outliers
nil
@U051N6TTC A silly mistake, but the result is basically identical
I imagine the results may also vary greatly between JVMs. But I'm too lazy to actually check it, especially since I almost never use partial myself.
@U2FRKM4TW probably because you're declaring the function in let, and #(...) wastes time declaring the function on the fly with each iteration of the bench