
something like:

(defn select-ns-keys [m namespace']
  (into {}
        (keep (fn [[k v]]
                (when (= namespace'
                         (namespace k))
                  [k v])))
        m))

(select-ns-keys {:a/key 1
                 :a/val 2
                 :b/key 3
                 :b/val 4}
                "b")
;; => #:b{:key 3, :val 4}


can probably be done more efficiently with reduce-kv, if you care
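For instance, a reduce-kv version might look like this (a sketch; `select-ns-keys*` is just an illustrative name):

```clojure
;; A reduce-kv variant of select-ns-keys; it avoids building
;; intermediate [k v] vectors, so it should be a bit faster.
(defn select-ns-keys* [m ns-str]
  (reduce-kv (fn [acc k v]
               (if (= ns-str (namespace k))
                 (assoc acc k v)
                 acc))
             {}
             m))

(select-ns-keys* {:a/key 1 :a/val 2 :b/key 3 :b/val 4} "b")
;; => #:b{:key 3, :val 4}
```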


what kind of simple solutions would you propose to distribute work amongst 64 worker threads? right now i'm using pmap for comfort, but i need more parallelism (i'm calling a blocking api and the other side is slow on latency, but can do a bunch of ops in parallel)


I'd start really simple: an ArrayBlockingQueue and a bunch of Threads. Then if I wanted "more powa", use an Executor. pmap is not likely ideal for throughput.
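A minimal sketch of that queue-plus-threads idea (names like `run-workers` and `process` are illustrative, not from the thread; results come back in completion order, not input order):

```clojure
(import '(java.util.concurrent ArrayBlockingQueue))

;; n worker threads pull tasks from a bounded queue, call the slow
;; blocking `process` fn, and conj results onto an atom. A sentinel
;; value per worker signals shutdown.
(defn run-workers [n process tasks]
  (let [q        (ArrayBlockingQueue. 1024)
        results  (atom [])
        sentinel ::done
        workers  (doall
                   (for [_ (range n)]
                     (doto (Thread.
                             (fn []
                               (loop []
                                 (let [t (.take q)]
                                   (when (not= t sentinel)
                                     (swap! results conj (process t))
                                     (recur))))))
                       (.start))))]
    (doseq [t tasks] (.put q t))
    (doseq [_ workers] (.put q sentinel))
    (doseq [^Thread w workers] (.join w))
    @results))
```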


claypoole has e.g. a pmap where you can specify your own thread pool.
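Usage is roughly this (assuming the com.climate.claypoole artifact is on the classpath; `call-remote-api` and `inputs` are made-up names for your blocking call and its arguments):

```clojure
(require '[com.climate.claypoole :as cp])

;; a 64-thread pool, shut down when the block exits
(cp/with-shutdown! [pool (cp/threadpool 64)]
  (doall (cp/pmap pool call-remote-api inputs)))

;; or, more tersely, let claypoole make (and manage) the pool:
;; (cp/pmap 64 call-remote-api inputs)
```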


just rolling my own copy of pmap with a variable thread count instead of the fixed CPU-core-related one?


will give it a shot if my pmap fork should fail me 🙂


why there's no spec for defprotocol and defrecord?


Executors are designed for this - there’s no reason not to just start there


indeed, this turned out easier than expected with the help of reify, Callable, and mapping deref over the results
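Something along these lines, presumably (a sketch; `pooled-map` is an illustrative name, not from the thread):

```clojure
(import '(java.util.concurrent Executors Callable))

;; pmap-like mapping over a fixed pool of n threads: reify makes the
;; Callables, and deref (which works on j.u.c.Future) waits for each
;; result in input order.
(defn pooled-map [n f coll]
  (let [pool (Executors/newFixedThreadPool n)]
    (try
      (->> coll
           (mapv (fn [x]
                   (.submit pool (reify Callable
                                   (call [_] (f x))))))
           (mapv deref))
      (finally (.shutdown pool)))))

;; e.g. 64-way parallel calls to a slow blocking API:
;; (pooled-map 64 call-remote-api ["a" "b" "c"])
```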


my remote workers that i invoke over the api do all the heavy lifting, i just need to tell them what to do and wait for their rather compact output (i literally tell them 3 strings to operate on and they haul away for 4-5 seconds before they reply), so performance at this point is not important, i just need more parallelism than 10 🙂. the interface of pmap is very comfortable and fits my use case well.


I will probably roll with the Executors, they will satisfy my needs


i expect the spread to be better when the work is something other than "just return that value as a string" 🙂


yep, even throwing a little sleep into the callable spreads the work out nicely


Is it by design that binding doesn't act like other let-like macros?

Clojure 1.10.1
user=> (def ^:dynamic x)
user=> (def ^:dynamic y)
user=> (binding [x "hello" y x] y)
#object[clojure.lang.Var$Unbound 0x58294867 "Unbound: #'user/x"]


binding is not like let… bindings are done in parallel
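If you want the sequential, let-like behavior, nesting the bindings gives it to you:

```clojure
(def ^:dynamic x)
(def ^:dynamic y)

;; each nested binding sees the frames pushed by the outer one
(binding [x "hello"]
  (binding [y x]  ; x is already bound here
    y))
;; => "hello"
```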


yes, but why?


(As an aside, there was a bug surrounding this in ClojureScript until 1.10.439)


It would be interesting to dig up the rationale for parallel binding. (I don’t know it offhand.)


I do not know the full rationale, but part of it might be that the things being bound via binding are all expected to already be def'd globally earlier, before the binding was eval'd, but let bound names can be purely local there, not existing anywhere else in the code.


That's more of a maybe-hint-at-the-reason than an answer to your question


it looks like binding uses var on each thing before calling push-thread-bindings -- is that perhaps why things being bound need to be def'd before?


Looks like the existing semantics were being established / documented in 2009


why is the stringwriter empty in this example?

$ clj -e "(def w (java.io.StringWriter.)) (push-thread-bindings {#'*out* w}) (try (println \"hello\") (finally (pop-thread-bindings))) (prn \">\" (str w))"
">" ""
I'm trying to debug a problem with a custom binding macro


oh wait, the empty let in binding is actually doing something?

$ clj -e "(def w (java.io.StringWriter.)) (let [] (push-thread-bindings {(var *out*) w}) (try (println \"hello\") (finally (pop-thread-bindings))) (prn \">\" (str w)))"
">" "hello\n"


I don't know exactly how that is behaving, but it probably has something to do with how the Java method pushThreadBindings uses the vars' dvals


when defining a record that implements a protocol, is it general practice to access the keys via the defrecord argument signature, and only use the this as part of the protocol definition for doing assoc/update?


The (let [] ...) in the defmacro of binding appears to a casual reader as if it could be replaced with (do ...) and everything would behave the same, but your code examples above, and the care with which most of Clojure's implementation is written, lead me to believe the let is significant there somehow.


It is, because each form in a top level do becomes a separate compilation unit


Let causes it to all be compiled and run as a unit


There is no explicit top level do in borkdude's examples. Is there an implicit one?


@theeternalpulse I've been told (by Alex, and maybe others) that it is both idiomatic and faster to reference the record's fields directly by their declared name, rather than use (:field this)


The behavior without the let is equivalent to the behavior of a top level do


So now I only use this in the protocol definition syntax and when manipulating the record as a whole.


(and I often use _ if the body doesn't need to reference this)
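For example (Area and Rect are made-up names):

```clojure
(defprotocol Area
  (area [this]))

(defrecord Rect [w h]
  Area
  ;; the declared fields w and h are in scope directly --
  ;; no need for (:w this) -- and _ since this isn't needed
  (area [_] (* w h)))

(area (->Rect 3 4))
;; => 12
```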


Is “compilation unit” a technical term in the compiler or just a way to say they might not share thread bindings or something?


Interestingly when implementing a Clojure interpreter I also chose to handle forms in do as separate compilation units. I'd like to hear more about this as well. Is the body in a let not handled as a do block?


I am not sure why the binding is behaving that way, and not at a real computer to investigate


I just know why let gets you different behavior


The compiler (and hence the eval in the repl) handle a top level form at a time


user=> (let [] (def x 1) (def y z))
Syntax error compiling at (REPL:1:19).
Unable to resolve symbol: z in this context
user=> x
#object[clojure.lang.Var$Unbound 0x29ea78b1 "Unbound: #'user/x"]
user=> (do (def x 1) (def y z))
Syntax error compiling at (REPL:1:15).
Unable to resolve symbol: z in this context
user=> x
1


I thought let had an implicit do and would be handled the same as if you had a real do


But it is a do still wrapped in a let


The do isn't top level


so first the entire let expression is analyzed; if there are any errors, like unresolved symbols, none of the side effects will have happened


maybe I should revisit that behavior in my clojure interpreter..


It is complicated, because there are compile-time and evaluation-time side effects


So for example, at compile time in your example, the var x was created, but because of compilation errors, the code was never run, which is why x is unbound


Runtime is a better word, not evaluation time


yeah. right now I handle every do the same, even inside a let expression, so maybe I'd have to revisit


Perhaps an aside-type detail: There are Clojure/JVM macros that cause mutations during macro-expansion. Fun!


maybe this isn't that well defined, just an implementation detail?


if there's docs about this, I'd like to see them


Google the gilardi scenario


Not "official" docs, but worth reading for some understanding of existing behavior.


ok, so breaking up do in separate compilation units was already a change


I wouldn't be surprised if some/all of that is different in detail for ClojureScript, and/or not applicable somehow.


but I have no detailed knowledge of ClojureScript in that area to say for sure. Just something to be mindful of if you care about cross-platform code.


yeah. I had some difficulties when writing tools around spec related to the order how things are analyzed in CLJS (e.g. try / catch), because analysis and macroexpansion also have side effects (e.g. in the spec implementation)


there was something like: the finally block was analyzed before the body


I know they have a lot more in common than they have different, but I suspect someone getting into this level of detail of Clojure/JVM and ClojureScript would have a deep appreciation of a statement like "The United States and Great Britain are two countries separated by a common language."


Also, thinking about it a bit more, perhaps a rationale for binding being 'parallel binding' rather than sequential like let is, is because making it sequential would probably require pushing N binding frames onto the stack, rather than only one, and then popping N off when the scope of the binding was exited. let doesn't create any kind of run-time stack in its implementation, and the sequential behavior is so very very useful. Common Lisp has parallel let and sequential let*, and I know Rich mentions explicitly in an early talk on Clojure to Common Lisp audience that Clojure's let is CL let*, by explicit choice.


So let is sequential because it doesn't cost any more implementation-wise to make it sequential, and it is so useful for it to be sequential in many contexts, and binding being parallel is perhaps due to efficiency concerns.


makes sense!


I noticed when implementing binding myself, I did the parallel implementation first, just because it was easier. Maybe that's how let got into Common Lisp, because of laziness on the implementor's part 😉


Unless someone wrote it up in a history of Lisp paper somewhere, that decision is probably over 50 years old now


It seems like to get to the origin of Common Lisp's let vs. let* and why they both exist, one would have to go back to the several popular flavors of Lisp that existed before Common Lisp was created, and see what they had. I suspect several of those prior Lisp flavors had both, and some may have had only one. I would guess that there were some very old Lisp implementations where let* was more expensive at run time than let, by at least enough for efficiency-minded programmers to care.


Funnily enough I made the same mistake as CLJS: