
yes it's delightful learning with and from y'all 😄

Noah Bogart15:11:00

idiom check: i have a map and new item to add to the map and a list of keys i want to attach the new item to:

=> map
{:a 1 :b 2}
=> item
3
=> keys
[:c :d :e]
=> new-map
{:a 1 :b 2 :c 3 :d 3 :e 3}

Noah Bogart15:11:02

is this best accomplished with reduce?

(let [old-map {:a 1 :b 2}
      item 3
      ks [:c :d :e]
      new-map (reduce
                (fn [m k]
                  (assoc m k item))
                old-map
                ks)]
  new-map)


(merge m1 (zipmap ks (repeat 3)))
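For reference, a quick REPL sketch of that suggestion, binding `old-map` for `m1` and the item explicitly:

```clojure
;; zipmap pairs each new key with the (infinitely repeated) item,
;; and merge folds the result into the original map.
(def old-map {:a 1 :b 2})
(def item 3)
(def ks [:c :d :e])

(merge old-map (zipmap ks (repeat item)))
;; => {:a 1, :b 2, :c 3, :d 3, :e 3}
```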

Noah Bogart15:11:09

there you go, thank you

Noah Bogart15:11:15

i had a feeling something like that existed

Ben Sless15:11:58

(reduce (fn [m k] (assoc m k x)) m ks)
Is about the same length in characters, and more lightweight
(reduce #(assoc %1 %2 x) m ks)
If you like this syntactic sugar

Noah Bogart16:11:11

you know i do

Ben Sless16:11:19

I'm not a fan of that particular macro, too magical

Ben Sless16:11:58

this is valid: #(assoc %1%2 x) 😰

Noah Bogart17:11:24

yeah, i tend to stay away from the fn macro when there's more than 1 input


My goal is to initialize an object only once when invoked from multiple threads. I came up with this solution and it seems to be working fine. But I'd like to know if there are any pitfalls I'm overlooking. Specifically, I'm worried about some threads getting nil instead of the instantiated object.

(def my-obj (atom nil))

(defn init-my-obj []
  (or @my-obj
      (do (compare-and-set! my-obj nil (SomeClass.))
          @my-obj)))


(def my-obj (delay (SomeClass.)))
and then @my-obj wherever you need it

👍 2

@UMPJRJU9E I think there is a risk of multiple instantiations with your implementation if init-my-obj is called on multiple threads. The reason is that the atom can be deref-ed as nil on multiple threads before any of them have set the instance.
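A sketch of that interleaving, using a counter in place of `SomeClass` so the duplicate construction is observable (the outcome is timing-dependent and may not reproduce on every run):

```clojure
;; Two threads can both deref nil before either compare-and-set! wins;
;; the "constructor" (simulated by swap! on a counter) then runs twice,
;; even though only one result ends up in the atom.
(def instantiations (atom 0))
(def my-obj (atom nil))

(defn init-my-obj []
  (or @my-obj
      (do (compare-and-set! my-obj nil (swap! instantiations inc))
          @my-obj)))

(let [a (future (init-my-obj))
      b (future (init-my-obj))]
  ;; both callers end up seeing the winning value,
  ;; but @instantiations may be 2
  [@a @b @instantiations])
```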

👍 1

any time you have something that derefs an atom to check the value followed by a mutation, you have a potential race

👍 2

And this is also good reading material:

👍 1
Alex Miller (Clojure team)17:11:16

not a potential race, an actual race :)

Alex Miller (Clojure team)18:11:18

delay or defonce are two good tools for this. delay is better from an aot compilation point of view, but requires consumers to deref. defonce is easier for consumers but will be evaluated during aot, which you often don't want.
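A sketch of the two shapes, with `Object.` standing in for the real class:

```clojure
;; delay: the body runs on the first deref only, and the result is cached;
;; every subsequent deref returns the same instance.
(def my-obj (delay (Object.)))
(identical? @my-obj @my-obj) ;; => true

;; defonce: the body runs when the form is evaluated (including during
;; AOT compilation), but consumers use the var directly with no deref.
(defonce my-obj-2 (Object.))
```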


> any time you have something that derefs an atom to check the value followed by a mutation, you have a potential race
Also good to know about swap-vals! if you do need both!
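For completeness, swap-vals! returns both the old and the new value from a single atomic step:

```clojure
;; swap-vals! mutates like swap! but returns [old-value new-value],
;; so there is no separate deref that could race with the update.
(def counter (atom 0))
(swap-vals! counter inc) ;; => [0 1]
```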

Alex Miller (Clojure team)18:11:33

often I combine def / delay with a wrapper function that does the deref for a friendlier combo
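That combo might look something like this (names are made up for illustration, and `Object.` stands in for the real class):

```clojure
;; the delay stays private; consumers just call (my-obj) and never
;; have to know there is a deref involved.
(def ^:private my-obj* (delay (Object.)))

(defn my-obj []
  @my-obj*)
```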


if you use force instead of deref it will behave as identity on non-delays
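That is, force derefs delays but acts as identity on anything else, so a caller needn't know whether it was handed a delay or a plain value:

```clojure
;; force realizes a delay...
(force (delay 42)) ;; => 42
;; ...and passes non-delays through unchanged.
(force 42)         ;; => 42
```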


Sorry, I only posted partial code. I did think of delay but it won't fit my case, since I need to pass params while initializing the class and they might not always be the same


Can this be converted to a delay?

(def sem (atom nil))

(defn- max-instances-semaphore [permits]
  (or @sem
      (do (compare-and-set! sem nil (Semaphore. permits))
          @sem)))


that is building races for the semaphores


if any threads have permits, then you call that, replacing the semaphore object. Those threads don't lexically capture sem, so when they release their permits they will release on the new semaphore, increasing the number of permits it has


Ironic, I know, but how do I go about doing something like this? I could always pass the instantiated semaphore to the threads, but I was wondering if there is a construct for doing something like this


you will always be better off passing


> if any threads have permits, then you call that replacing the semaphore object, and those threads don't lexically capture sem, when those threads release permits they will release on the new semaphore increasing the number of permits it has
Yeah, the goal is to avoid exactly this


in general, I find it is usually better to avoid using something like a semaphore at all, and instead limit concurrency via something like a fixed size executor
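A minimal sketch of that approach (pool size and task count are arbitrary):

```clojure
(import '(java.util.concurrent Executors TimeUnit))

;; a fixed pool of 2 threads: at most two tasks run concurrently and
;; the rest wait in the executor's queue -- no semaphore bookkeeping.
(def pool (Executors/newFixedThreadPool 2))
(def done (atom 0))

(dotimes [_ 5]
  (.execute pool #(swap! done inc)))

(.shutdown pool)
(.awaitTermination pool 5 TimeUnit/SECONDS)
@done ;; => 5
```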


The thing is we're using integrant and I need to instantiate the Semaphore elsewhere and pass it to the function that will be invoked in parallel. The function API would've been simpler if I could just pass the max permits instead of the actual Semaphore.


> in general, I find it is usually better to avoid using something like a semaphore at all, and instead limit concurrency via something like a fixed size executor
Yup, totally agree, but in this special case, the objects go stale quite quickly (browser drivers) and a Semaphore seemed like a quick way to cap the objects.


use an executor

👍 1

(my previous stance was "maybe use an executor" that explanation makes it definite)

Alex Miller (Clojure team)18:11:22

have you considered actually using java.util.concurrent.Semaphore ?

Alex Miller (Clojure team)18:11:55

or is that what you're using


using lots of them

Alex Miller (Clojure team)18:11:26

generally, if you're having trouble def'ing something, maybe don't :) construct at startup and pass to those that need it

👍 1

a fixed size executor will also let you adjust the number of threads doing work on the fly without creating a sequence of semaphores


(you can do that on a single semaphore as well but you may need some extra accounting)


> generally, if you're having trouble def'ing something, maybe don't :) construct at startup and pass to those that need it
That's what I was doing: passing an instantiated Semaphore. But I thought passing the max-instance number would simplify the API compared to passing in a construct that caps the number of objects (object pool, semaphore), but I guess it's not worth it.


hiredman, it is the number of objects I want to cap and not the threads


that is largely the same thing


Anyway, there is also a pooled version using ArrayBlockingQueue
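One way that pooled version might be sketched (this is my own illustration, with `Object.` standing in for the browser-driver class):

```clojure
(import '(java.util.concurrent ArrayBlockingQueue))

;; a bounded queue pre-filled with instances caps the live objects:
;; take blocks when the pool is empty, put returns an object to it.
(def obj-pool (ArrayBlockingQueue. 2))
(dotimes [_ 2] (.put obj-pool (Object.)))

(defn with-pooled-object [f]
  (let [obj (.take obj-pool)]
    (try
      (f obj)
      ;; always hand the object back, even if f throws
      (finally (.put obj-pool obj)))))

(with-pooled-object some?) ;; => true
```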


objects don't magically pop into existence, code has to create them, and that code has to run somewhere

Noah Bogart18:11:43

another idiom check! i have an app-state map atom, and i've taken to writing pairs of functions that operate on it: a do-X! function that calls swap!, and a do-X-impl that takes the deref'd map and returns the whole map with changes applied. I'm doing this because I need to keep data in multiple keys in sync (vs having a data race by deref'ing and swap!'ing multiple times in a row). is this an okay means of handling this? here's an example:

(defn ^:private register-game-impl
  [{:keys [games users uid->gameid gameid->uids]} game]
  (let [gameid (:gameid game)
        games (assoc games gameid game)
        uids (keep :uid (:players game))
        uid->gameid (merge uid->gameid (zipmap uids (repeat gameid)))
        gameid->uids (assoc gameid->uids gameid (set uids))]
    {:games games
     :users users
     :uid->gameid uid->gameid
     :gameid->uids gameid->uids}))

(defn register-game! [game]
  (swap! app-state register-game-impl game))


@UEENNMX0T not directly related to your question, but I would push back on whether you need to be building and storing these caches/indexes in the atom. What would happen if state was just {:games ,,, :users ,,,} and the indexes were generated and passed around in local bindings where they would actually be needed?


ie. are multiple unrelated "threads" operating on these indexes (so you have to coordinate them) AND are the indexes used so often that they are worth it to generate and cache upfront AND are the read vs write statistics skewed towards reading (i.e. if you're swap!-ing more often than deref-ing, this could in fact be a net-negative for performance).

Noah Bogart19:11:06

single threaded, single jar running on a digital ocean box, maybe like 100 players at a given time playing less than 50 games (2 player card game) so impact is low either way

Noah Bogart19:11:25

i did the caching to make certain "queries" easy for myself


my point was, if impact is low, and the caches are localized - and quick to generate - it may be helpful to not try to keep them up-to-date in the atom (which complicates the code for all updates of the atom)

👍 1

you can have a function that generates uid->gameid for easy queries without trying to keep it in sync when doing swaps
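For example, a lookup derived on demand from the plain :games map (the shape is assumed from the register-game example above):

```clojure
;; derive uid->gameid from :games when needed, instead of keeping a
;; second copy in sync inside the atom on every swap!.
(defn uid->gameid [games]
  (into {}
        (for [[gameid game] games
              uid (keep :uid (:players game))]
          [uid gameid])))

(uid->gameid {"g1" {:gameid "g1" :players [{:uid "u1"} {:uid "u2"}]}})
;; => {"u1" "g1", "u2" "g1"}
```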

Noah Bogart19:11:37

that makes sense. i certainly don't know what i'm doing, i just know that the way the code is currently written is very bad lol (multiple atoms that each have to be updated individually, spaghetti code in general, etc)

Noah Bogart19:11:42

cool, i'll try that out


@UEENNMX0T BTW, your state map looks really similar to the kind of graph-like map-structures that Joinery and friends are trying to optimize for - so if you're looking for a way to simplify your queries and mutations, perhaps you can use one of the ready-made libraries (or at least get some inspiration from their APIs)

Noah Bogart19:11:46

thanks for the link! i'll check it out

Noah Bogart18:11:52

i first had this as an inlined function to swap! and then as a function in a let block before pulling it all the way out into its own function for clarity.

Alex Miller (Clojure team)18:11:38

register-game-impl is then pure and you can test it, spec it, whatever and the state is isolated in register-game!
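Concretely, repeating the impl from the example above, it can be exercised with plain maps and no atom in sight:

```clojure
;; the same pure transformation; state only enters via swap! elsewhere.
(defn register-game-impl
  [{:keys [games users uid->gameid gameid->uids]} game]
  (let [gameid (:gameid game)
        uids (keep :uid (:players game))]
    {:games (assoc games gameid game)
     :users users
     :uid->gameid (merge uid->gameid (zipmap uids (repeat gameid)))
     :gameid->uids (assoc gameid->uids gameid (set uids))}))

;; call it directly in a test:
(register-game-impl {:games {} :users {} :uid->gameid {} :gameid->uids {}}
                    {:gameid "g1" :players [{:uid "u1"}]})
;; => {:games {"g1" {:gameid "g1" :players [{:uid "u1"}]}}
;;     :users {}
;;     :uid->gameid {"u1" "g1"}
;;     :gameid->uids {"g1" #{"u1"}}}
```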

Alex Miller (Clojure team)19:11:03

you might consider whether you even need register-game!. If you make your "transformation" functions all external, then you can sometimes keep the state local to the place where you create it and avoid even making stateful functions

☝️ 1
Alex Miller (Clojure team)19:11:57

sometimes I'll play that game to try to minimize the number of places that even need to be aware there is state - can you reduce it to the point that all stateful code ends up on a single screen and you can see it together?

Noah Bogart19:11:24

hah that's an interesting idea! certainly seems like it could be doable if I can architect it right. right now, app-state is an in-memory database because the game objects aren't serializable (lots of stored functions) and the ! functions are being called from sente handlers (web socket messages), which makes returning a new app-state a little tricky but not impossible

Alex Miller (Clojure team)19:11:13

ah, maybe not possible then

Alex Miller (Clojure team)19:11:46

but that's the premise I start from - consider state and stateful functions to be radioactive and apply pressure to minimize their spread

👍 2
Ben Sless11:11:45

Unfortunately, I found that this can exert pressure which can escape via unexpected release valves, mainly falling into the trap pointed out in Out Of The Tar Pit of passing the entire world as an extra argument

Ben Sless11:11:08

(guilty of it myself, not throwing stones)

Andy Carlile20:11:59

I'm trying to set up a web app using the reagent template and updating it to use http-kit and sente. I started with lein new reagent myproject +cider +figwheel, then followed a guide to replace jetty with http-kit.

The project compiles as long as one doesn't actually try to require sente:

if I try to (:require [taoensso.sente :as sente :refer (cb-success?)]) in the client, compilation fails with No such namespace:

if I try to (:require [taoensso.sente :as sente]) in the handler, compilation breaks with java.lang.RuntimeException: No such var: str/starts-with?

and if I try to (:require [taoensso.sente.server-adapters.http-kit :refer (get-sch-adapter)]) (also in the handler), compilation breaks with java.lang.RuntimeException: No such var: hk/as-channel

each of these errors appears to be way above my application level, and I've removed all references to sente outside of the require statements. Am I using the right sente version? Could there be some other setup missing in my environment?


@UQZQ9T3NV these deps are ancient:

[org.clojure/clojure "1.7.0-RC1"]
[org.clojure/clojurescript "0.0-3291" :scope "provided"]
I would start by bumping these deps, especially since this is a new project


current Clojure is 1.10.3 and ClojureScript is 1.10.891 (or at least 1.10.773 as defined in sente deps)


Looks like concurrent code loading


something is loading clojure code from multiple threads which is not safe


Actually, it may just be a version conflict


Have you done a lein clean recently?


I wonder if adding http-kit changed some transitive dependency versions and you have some stale class files AOT-compiled from the old versions

pithyless20:11:53

well, for starters, looks like a really old version of Clojure and ClojureScript

Charlie Mead20:11:45

Here's the source for the some-> macro:

(defmacro some->
  "When expr is not nil, threads it into the first form (via ->),
  and when that result is not nil, through the next etc"
  {:added "1.5"}
  [expr & forms]
  (let [g (gensym)
        steps (map (fn [step] `(if (nil? ~g) nil (-> ~g ~step)))
                   forms)]
    `(let [~g ~expr
           ~@(interleave (repeat g) (butlast steps))]
       ~(if (empty? steps)
          g
          (last steps)))))
My question is this: I'm confused about the benefit of using butlast and potentially saving a single binding. Why not go with a simpler body of:

`(let [~g ~expr
       ~@(interleave (repeat g) steps)]
   ~g)
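For context, hand-expanding (some-> 1 inc inc) under both bodies (my own sketch) suggests they produce the same value; the butlast version just makes the final step the return expression instead of one more binding:

```clojure
;; with butlast, the last step is the body of the let:
;; (let [g 1, g (if (nil? g) nil (inc g))]
;;   (if (nil? g) nil (inc g)))
;;
;; without butlast, every step becomes a binding and g is returned:
;; (let [g 1, g (if (nil? g) nil (inc g)), g (if (nil? g) nil (inc g))]
;;   g)
;;
;; either way:
(some-> 1 inc inc)   ;; => 3
(some-> nil inc inc) ;; => nil
```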