
(def op "foo")
  ((intern *ns* (symbol op)))
  ((eval (read-string op)))
is any of those better? The use case is to resolve a function that is passed as cli arguments


What about resolve or requiring-resolve?
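requiring-resolve fits the CLI use case well, since it also loads the namespace on demand; a sketch (run-op is a made-up wrapper):

```clojure
;; Resolve a fully qualified function name passed as a CLI argument.
;; requiring-resolve (Clojure 1.10+) loads the namespace if needed and
;; returns the var, or nil if the symbol cannot be resolved.
(defn run-op [op & args]
  (if-let [f (requiring-resolve (symbol op))]
    (apply f args)
    (throw (ex-info (str "Unknown operation: " op) {:op op}))))

(run-op "clojure.string/upper-case" "foo") ;=> "FOO"
```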


sounds good


What am I doing wrong here? Why does

(ns server.transit
  (:refer-clojure :exclude [read]))

(defn read [input])
give me
; WARNING: read already refers to: #'clojure.core/read in namespace: server.transit, being replaced by: #'server.transit/read
when I evaluate the ns?

Alexis Schad09:05:13

Are you in CLJ or CLJS? What tools do you use?


Tried both clj and cljs. Calva (VS Code) on Windows. So, am I using it correctly?

Alexis Schad09:05:23

yes, it should work

gratitude 1
Alexis Schad09:05:00

the warning comes from the file itself or another file using server.transit?


When I evaluate the file itself.

Alexis Schad09:05:18

Are you using shadow-cljs? Or what's the compiler?

Alexis Schad09:05:44

ho you got the error from the REPL right?


I am using shadow, but am currently in clj, through Calva. Error's from the REPL, yes.


If you do this from the shadow-cljs prompt, w/o involving Calva, does the problem go away?


When I do (load-file "transit.clj") from Clojure CLI directly no warnings.


I think this might be Calva causing it. Please file an issue about it.

Alexis Schad09:05:40

I did reproduce it, and yes it's only happening in the Calva REPL instance, when using shadow-cljs cljs-repl there's no warning

🙏 1
gratitude 1

Thanks for the issue!


Thanks for the quick response, as always 🙂


The warning doesn't happen when I load another ns which uses server.transit.

Alexis Schad10:05:21

Are you sure you didn't evaluate server.transit before you loaded it?

Alexis Schad10:05:11

Because once it is defined, the warning won't show again (if you evaluate twice the file, there's only one warning the first time it is defined)


Makes sense, but I tried it again. Fresh jack-in > load file which uses server.transit > no warning about read.

👍 1

That makes sense to me, having checked what Calva is doing that causes this. 😃 Loading a file causes a switch of the ns, and there Calva injects (clojure.core/refer-clojure), which seems to interfere with the file loading. But it isn't transitive.

👍 2

You should be able to confirm my suspicion by trying it with Calva 2.0.262. And then it is introduced in 263.


Confirmed 🙂


Thanks! 🙏


Anyone able to explain how to translate this to Clojure? ((OAuthDesktopMobileAuthCodeGrant)(authorizationData.getAuthentication())).getAuthorizationEndpoint() — I am not sure what the extra brackets wrapping (authorizationData.getAuthentication()) mean in Java.


(.getAuthorizationEndpoint ^OAuthDesktopMobileAuthCodeGrant (.getAuthentication data))
Think I have worked it out: it's a type cast, so the above is equivalent, if I understand correctly?


Yup, the brackets indeed seem to be for the type cast.


Your translation into Clojure looks good to me.
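For completeness, the Java cast can also be spelled as a runtime cast rather than a type hint; a sketch with String standing in for the OAuth class:

```clojure
;; Java: ((String) obj).length()
(def obj "hello")

;; Type hint: compile-time only, avoids reflection; no runtime check.
(.length ^String obj) ;=> 5

;; Runtime cast: throws ClassCastException if obj isn't a String.
(.length ^String (cast String obj)) ;=> 5
```

The type hint is the idiomatic choice when you just want fast interop; cast is useful when you actually want the runtime check.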


cheers, half the battle is knowing what to google when you are not familiar with java 🙂


I think I have a reason to use reducers, but I don't know what I'm doing. I have the following function

(defn summarise-n-games [N & {:keys [players structure pulls phi]
                              :or {players 6 structure :cyclic pulls 10 phi 0.1}}]
  (let [game-reporter (make-game-reporter :players players :structure structure :pulls pulls :phi phi)
        trial-results (take N (repeatedly game-reporter))
        winners (filter :correct trial-results)]
    {:popsize players
     :structure (name structure)
     :pulls pulls
     :phi phi
     :win-freq (float (/ (count winners) N))
     :mean-round (stats/mean (map :rounds winners))
     :stdev-round (Math/sqrt (stats/variance (map :rounds winners)))}))
The (take N (repeatedly game-reporter)) is running a stochastic model, and the function reports summarised convergence statistics for the N runs of the model. From what I've read, this seems ideal to use reducers and would benefit from parallelisation. It's quite slow when I run it with 10,000 games under certain parameter settings.


1. I would start by identifying the primary performance culprits - there may be some low-hanging performance issues you can fix first (e.g. unnecessary lazy-seqs, too much GC, calling slow functions, boxing/unboxing, etc). If you don't know where to start with this, I'd start with a flamegraph profiler:


2. Depending on what you're doing inside the game-reporter, you may want to try to rewrite some of this code using transducers. You may be interested in looking over the docs: and also this helper library:


3. I'd consider reducers only after doing (1) and (2). At the very least, after (1) you will be better prepared to pinpoint what exactly is hurting the performance most, which will help the performance-savvy folks here provide you with more precise guidance.


PS. repeatedly can also take a count: (repeatedly N game-reporter)


good to know


and if you quickly want to get a feel for "is X worth parallelizing" -> you can run some benchmarks with map vs pmap


pmap doesn't give you all the controls you might want to fiddle with, but it's a quick way to just see how the program behaves if you spin up and process it in multiple threads


game-reporter is stochastic, and completely independent between runs. But depending on the parameters it can be pretty slow. It's a bit of an academic project, but I'm trying to replicate the results of some published work


yeah, but without knowing more, the first question should be - is game-reporter doing something I can easily fix to improve performance? (the flamegraph would help with this)


ok, thanks.


But if the runs are long enough to measure in seconds, you can try something like:

(time (doall (map (fn [_] (game-reporter)) (range 100))))

;; vs

(time (doall (pmap (fn [_] (game-reporter)) (range 100))))


^ this may help answer the question: is it worth trying to parallelize via threads


time is not an exact science; for actual benchmarking I suggest using

Alexis Schad13:05:30

For parallel execution you can use pcalls (instead of the above pmap + range): (apply pcalls (repeat N game-reporter))

☝️ 1
👍 2

The real work is in play. So:

(time (doall (map #(do (play %1) nil) (make-game-sequence 100 :pulls 2 :phi 0.01))))
;; "Elapsed time: 13321.555974 msecs"

(time (doall (pmap #(do (play %1) nil) (make-game-sequence 100 :pulls 2 :phi 0.01))))
;; "Elapsed time: 3851.344545 msecs"


(make-game-sequence) makes a bunch of starting games with the same parameters.


This is very helpful, thank you


If you don't want to keep the results in memory, just use dorun instead of doall


Next, I'd try to get a flamegraph for (dorun (map play (make-game-sequence 100 :pulls 2 :phi 0.01)))

👍 1

Is there a nicer way to do this?

(let [my-map {:a "a" :b "b" :c "c"}
      my-key :b]
  (->> (dissoc my-map my-key)
       (cons my-key)))
I want to doseq over a map's keys starting with a particular key.


@U02E9K53C9L Can you explain the problem you're trying to solve here? doseq is for side-effects, not manipulating and returning a data structure, and hash maps are inherently unordered...


Imagine a map relating channel ids to core.async channels. I want to select a channel which should be operated on first, then do the rest in any order.

(doseq [c chans]
  (async/put! c msg))


Ah, so you're just trying to produce the sequence of keys such that one is handled first and then "all of the others"... perhaps this is cleaner:

(let [key-order (cons my-key (disj (set (keys my-map)) my-key))]
  (doseq [k key-order :let [c (get chans k)]]
    (async/put! c msg)))


Or unroll it:

(async/put! (get chans my-key) msg)
(doseq [[k c] chans :when (not= k my-key)]
  (async/put! c msg))


Thought about that second approach, but not quite happy with the separation – it doesn't make it obvious that all channels are handled the same, just that the ordering changes. But it is much easier to read than your first (also nice!) suggestion. I guess I'll stop fiddling and just pick one 🙂 Thank you gratitude


"Perfection is the enemy of the good." -- Gustave Flaubert, French realist novelist (1821-1880) 🙂

😅 1

I have a stupid question: Is it possible to define Java interface exclusively in Clojure?


Thank you Alex!
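Yes - definterface generates a real Java interface from Clojure alone; a minimal sketch (Greeter is a made-up name):

```clojure
;; definterface emits a Java interface class at compile time.
;; Return and argument types are given as metadata hints.
(definterface Greeter
  (^String greet [^String who]))

;; Any reify/deftype can then implement it like a normal interface.
(def g (reify Greeter
         (greet [_ who] (str "Hello, " who))))

(.greet g "world") ;=> "Hello, world"
```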


Made a little contribution to ClojureDocs under for. It took me forever to get any kind of useful answer to how to iterate through nested data

🎉 2
👏 4

is there an idiomatic alternative when I find myself wanting to write (apply or coll)? (`or` being a macro, that one will not fly.) I can do (some identity coll), but using identity as the predicate always feels a little redundant.

Alexis Schad18:05:19

That's the way, there's no simpler one

Alexis Schad18:05:19

If you use this pattern several times in your code you can define a function like (def or-seq (partial some identity))


ok duly noted. Interestingly I seem to run into this pattern way more often than I would have thought. Thanks for the input!


weird, i almost never run into this pattern


have a wider example you can share?


I end up playing with code golfing in clojure…perhaps why I run into this


In this particular case I was writing a recursive algorithm which does a search through a search space, branching n times at each level. I wanted the algorithm to return a single value, the answer, but internally it needs to do something like map on each level to implement the branching. Thus (apply or (map #(…recursive call here) branching-coll))


just an example and not fully cooked, can’t remember the previous instance, was a few days ago


where the or would have made it so that the recursive call returns a single value, the only non-nil one


but (some identity (map… works as well


just doesn’t correspond as closely to my brain as or

Alexis Schad19:05:49

@U050ECB92 was right to ask you; I don't use this pattern either. In your small example it makes no sense to use a some after a map - you can just use some directly


(some #(...recursive call here) branching-coll) -- no need for identity and map


but some returns the value from the coll, not the value returned from the predicate right?


I need the value returned from the recursive call

Alexis Schad19:05:15

it is from the predicate, else it would have been (first (filter pred coll))


([pred coll])
  Returns the first logical true value of (pred x) for any x in coll,
  else nil.  One common idiom is to use a set as pred, for example
  this will return :fred if :fred is in the sequence, otherwise nil:
  (some #{:fred} coll)
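A quick REPL sketch confirming that some returns the predicate's value, not the matching element:

```clojure
;; some yields the predicate's (truthy) return value:
(some #(when (even? %) (* % 10)) [1 3 4 5]) ;=> 40, not 4

;; so, for return values, these two forms are equivalent:
(= (some identity (map #(when (even? %) (* % 10)) [1 3 4 5]))
   (some #(when (even? %) (* % 10)) [1 3 4 5])) ;=> true
```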


i.e. there is a difference between (some identity (map f coll)) and (some f coll) right?


aha…ok makes sense…glad I assumed I was an idiot : ) thanks guys


guess it would help if I read the docs


If in doubt, try it in the REPL:

dev=> (some even? [1 2 3 4 5])
true

Alexis Schad19:05:16

Btw your initial thought about some is a common pattern that has no native function in clojure; like I said above, you have to (first (filter ...))


Is there a short way to call a function if it's not nil? Like in kotlin f?.invoke()

Cora (she/her)21:05:36

(when f (f)) should work


too long, I wanted something like: (safe f arg1 arg2 ... argn) or (safe-> f (eval arg2 ... argn) (do-something-else))


I can write a macro, but maybe there is something already; can't google it

Cora (she/her)21:05:18

that's too long??


In my current example with when:

(let [data (-> % .body utils/keywordize-blunt)]
  (if (= (.statusCode %) 200) 
    (when on-success (on-success data)) 
    (when on-error (on-error  data))))
with the custom safe fn:

(safe
  (if (= (.statusCode %) 200) on-success on-error)
  (-> % .body utils/keywordize-blunt))

Cora (she/her)21:05:14

the former is a lot more intelligible to me

Cora (she/her)21:05:26

but I'm sure you could write a macro with that


it's just:

(defn- safe [f & args]
  (when f (apply f args)))
I could rename it to safe-call for readability

Cora (she/her)21:05:13

you may not want to even bother keywordizing unless there's on-success or on-error

Cora (she/her)21:05:50

(defmacro safe [f & args]
  `(when ~f
     (~f ~@args)))

Cora (she/her)21:05:05

I'm bad at macros but I think that would work for preventing args from being evaluated if there's no f

Cora (she/her)21:05:48

(defmacro safe [f & args]
  `(when ~f
     (~f ~@args)))

(macroexpand '(safe nil 1))
;; => (if nil (do (nil 1)))

(macroexpand '(safe identity 1))
;; => (if identity (do (identity 1)))

👍 2

Thank you! I started thinking about your impression: > the former is a lot more intelligible to me. Maybe something like this is better?

#(let [callback (if (= (.-statusCode %) 200) on-success on-error)]
   (when callback (callback (-> % .-body utils/keywordize-blunt ))))

Cora (she/her)21:05:32

you could make that a when-let

Cora (she/her)21:05:55

instead of just let and then you can remove the inner when

👍 1

seems to be more readable and no new macro is needed

Cora (she/her)21:05:13

very cool ☺️
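For the record, the when-let shape might look like this; handle-response is a made-up name and the response is simplified to a plain map instead of the interop object:

```clojure
(defn handle-response
  "Call whichever optional callback matches the status;
   a no-op returning nil when the matching callback is nil."
  [{:keys [status body]} on-success on-error]
  (when-let [callback (if (= status 200) on-success on-error)]
    (callback body)))

(handle-response {:status 200 :body {:a 1}} identity nil) ;=> {:a 1}
(handle-response {:status 500 :body {:a 1}} identity nil) ;=> nil
```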


when-let also avoids evaluating ~f twice, a cardinal sin in macrology.

Drew Verlee23:05:49

((or on-success on-error) data)

Drew Verlee23:05:30

I wouldn't be happy that those are optional functions though.

Drew Verlee23:05:45

What happens if they're both not there?


when-let handles it


actually, I was just trying to get an API like I had with cljs-ajax in the past (they allowed passing optional handlers, as I remember)

Jon Olick22:05:04

so STM is basically atoms, but working on multiple things at once atomically, in the same basic fashion?

Jon Olick22:05:42

That fashion being, record the old value that you started the calculations with, if they are the same then lock / update all of them or something

Jon Olick22:05:26

It would be nice to atomically update two or more atoms at once (without going full STM)

Jon Olick22:05:11

can probably emulate that with a hashmap or something, where multiple elems of the hash are considered jointly updated

Jon Olick22:05:33

but in the case where you really want two disparate things to update atomically, being able to do multiple at once would be a good thing


atoms are not suited for this purpose. often you can put both pieces of information in the same atom though


(def customer-name (atom "microsoft")) 
(def customer-id (atom 2))
(def customer-info (atom {:name "microsoft" :id 2}))
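With the single customer-info atom above, one swap! updates both fields in one atomic step (sketch):

```clojure
(def customer-info (atom {:name "microsoft" :id 2}))

;; One swap! changes both fields atomically - no observer can
;; ever see the new name paired with the old id.
(swap! customer-info assoc :name "github" :id 3)

@customer-info ;=> {:name "github", :id 3}
```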

Jon Olick22:05:39

but in the situation where you don't want to rewrite your code-base, updating two or more atomically would be preferable

Jon Olick22:05:25

one way to do that would be to have a multi-atom

Jon Olick22:05:37

kind of like where you conglomerate multiple atoms into the same object

Jon Olick22:05:35

then you could have a multi-swap! or something equivalent

Jon Olick22:05:58

implementation would look just like STM I bet... but still thinking about the details

Jon Olick22:05:50

I bet you could implement multi-swap! without a multi-atom


That sounds like a fun experiment, but if you can just use one atom then I would very much recommend that, given you are using it as a reference for data. Maybe try putting the multiple-atom code side by side with a single-atom version. It gets clearer that way. I did this for a little game I made, and in the end I liked the single-atom version much better.

Alex Miller (Clojure team)22:05:29

Why are you working so hard not to use refs to coordinate?

Alexis Schad22:05:35

> but in the situation where you don't want to rewrite your code-base, updating two or more atomically would be preferrable This is not supposed to happen. The state management should be a very small part of your code-base and if it's not the case you're probably doing it wrong

Jon Olick22:05:37

this is not so much a question of should I, its could I

Alex Miller (Clojure team)22:05:15

If you need to coordinate changes to multiple pieces of state, Rich spent like 6 months building a solution to this exact problem in the language, so you don't have to :)
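That built-in solution is refs coordinated by dosync; a minimal sketch with made-up balance refs:

```clojure
;; Refs participate in STM transactions: the alters inside dosync
;; either all commit together or the transaction retries as a whole.
(def balance-a (ref 100))
(def balance-b (ref 0))

(defn transfer! [amount]
  (dosync
    (alter balance-a - amount)
    (alter balance-b + amount)))

(transfer! 25)
[@balance-a @balance-b] ;=> [75 25]
```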

hiredman23:05:23

is a great paper if you are planning to write your own stm. it integrates channels and stm, so you can do things like atomically read from a channel, write to another, and set a stm ref to some value

☝️ 1

core.async can't even do atomic (transactional) operations that read from a channel then write to a channel, let alone combine it with the stm

Alex Miller (Clojure team)00:05:30

that sounds like an interesting paper


Beautifully typeset too


Apropos of this conversation: the paper builds its stm using what it calls a "k-swap protocol" on top of atomic references. I forget how deep into the details of that the paper gets, but as far as I can tell it isn't much different from how clojure implements stm transactions (using atomic references feels morally superior, but if you use them to implement a complex locking protocol, it is just locks)

Alex Miller (Clojure team)02:05:34

the most clever part of the Clojure STM is that it doesn't require any global coordinator (similar to core.async alts over channels actually)

Alex Miller (Clojure team)02:05:03

but I'm not sure that's particularly clever, mostly just well designed and implemented

Jon Olick03:05:14

done with the unholy fusion of STM and atoms (though need to test, so not done done yet)