This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-11-16
Channels
- # aleph (1)
- # announcements (3)
- # babashka (24)
- # beginners (333)
- # clj-kondo (36)
- # cljs-dev (11)
- # clojure (75)
- # clojure-italy (3)
- # clojure-uk (15)
- # clojurescript (31)
- # core-logic (18)
- # cursive (2)
- # data-science (3)
- # datomic (1)
- # events (1)
- # fulcro (13)
- # graalvm (2)
- # jobs (1)
- # kaocha (2)
- # malli (1)
- # overtone (6)
- # re-frame (7)
- # reagent (17)
- # rewrite-clj (3)
- # shadow-cljs (10)
- # sql (9)
- # vim (1)
something like:
(defn select-ns-keys [m namespace']
  (into {}
        (keep (fn [[k v]]
                (when (= namespace' (namespace k))
                  [k v])))
        m))

(select-ns-keys {:a/key 1
                 :a/val 2
                 :b/key 3
                 :b/val 4}
                "b")
;; => #:b{:key 3, :val 4}
can be done more efficiently with reduce-kv probably if you care
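A sketch of that reduce-kv variant, assuming the same select-ns-keys signature as above; it accumulates directly into the result map instead of going through a transducer:

```clojure
;; reduce-kv walks the map without creating intermediate map entries,
;; which is typically a bit faster for this kind of key filtering.
(defn select-ns-keys [m namespace']
  (reduce-kv (fn [acc k v]
               (if (= namespace' (namespace k))
                 (assoc acc k v)
                 acc))
             {}
             m))

(select-ns-keys {:a/key 1
                 :a/val 2
                 :b/key 3
                 :b/val 4}
                "b")
;; => #:b{:key 3, :val 4}
```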
what kind of simple solutions would you propose to distribute work amongst 64 worker threads? right now i'm using pmap for comfort but i need more parallelism (i'm calling a blocking api and the other side is slow on latency, but can do a bunch of ops in parallel)
I'd start really simple. An arrayblockingqueue and a bunch of Threads. Then if I wanted "more powa" use an Executor. Pmap is not likely ideal for throughput.
just rolling my own copy pmap with a variable number instead of the fixed cpu core related one ?
@kulminaator - claypoole may be an option: https://github.com/TheClimateCorporation/claypoole
will give it a shot if my pmap fork should fail me 🙂
Executors are designed for this - there’s no reason not to just start there
indeed, this turned out easier than expected with the help of reify, Callable, and mapping deref on the results
my remote workers that i invoke over the api do all the heavy lifting, i just need to tell them what to do and wait for their rather compact output (i literally tell them 3 strings to operate on and they haul away for 4-5 seconds before they reply), so performance at this point is not important, i just need more parallelism than 10 🙂. the interface of pmap is very comfortable and fits my usecase well.
I will probably roll with the Executors, they will satisfy my needs
i expect the spread to be better when the work is something other than "just return that value as a string" 🙂
yep, even throwing a little sleep into the callable spreads the work out nicely
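A minimal sketch of the Executors approach discussed above: submit each piece of work as a Callable to a fixed-size pool, then deref the resulting futures. (Clojure fns already implement Callable, so reify isn't strictly required; the pool size n and the name pmap-n are illustrative, not from the original messages.)

```clojure
(import '(java.util.concurrent Executors ExecutorService Callable))

(defn pmap-n
  "Like pmap, but with an explicit degree of parallelism n."
  [n f coll]
  (let [^ExecutorService pool (Executors/newFixedThreadPool n)]
    (try
      (->> coll
           ;; mapv forces eager submission of all tasks; the ^Callable
           ;; hint picks the submit(Callable) overload so the future's
           ;; value is f's return (submit(Runnable) would yield nil).
           (mapv (fn [x] (.submit pool ^Callable (fn [] (f x)))))
           (mapv deref))
      (finally
        (.shutdown pool)))))

;; e.g. (pmap-n 64 call-slow-api work-items), where call-slow-api is
;; the blocking remote call described above.
```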
Is it by design that binding doesn't act like other let-like macros?
Clojure 1.10.1
user=> (def ^:dynamic x)
#'user/x
user=> (def ^:dynamic y)
#'user/y
user=> (binding [x "hello" y x] y)
#object[clojure.lang.Var$Unbound 0x58294867 "Unbound: #'user/x"]
It would be interesting to dig up the rationale for parallel binding. (I don’t know it offhand.)
I do not know the full rationale, but part of it might be that the things being bound via binding are all expected to already be def'd globally earlier, before the binding was eval'd, but let-bound names can be purely local there, not existing anywhere else in the code.
That's more of a maybe-hint-at-the-reason than an answer to your question
it looks like binding uses var on each thing before calling push-thread-bindings -- is that perhaps why things being bound need to be def'd before?
Looks like the existing semantics were being established / documented in 2009 https://clojure.atlassian.net/browse/CLJ-152
why is the stringwriter empty in this example?
$ clj -e "(def w (java.io.StringWriter.)) (push-thread-bindings {#'*out* w}) (try (println \"hello\") (finally (pop-thread-bindings))) (prn \">\" (str w))"
#'user/w
hello
">" ""
I'm trying to debug a problem with a custom binding macro
oh wait, the empty let in binding is actually doing something?
$ clj -e "(def w (java.io.StringWriter.)) (let [] (push-thread-bindings {(var *out*) w}) (try (println \"hello\") (finally (pop-thread-bindings))) (prn \">\" (str w)))"
#'user/w
">" "hello\n"
I don't know exactly how that is behaving, but it probably has something to do with how the Java method pushThreadBindings uses the dvals variable in Var.java
when defining a record that implements a protocol, is it general practice to access the keys via the defrecord argument signature, and only use the this as part of the protocol definition for doing assoc/update?
The (let [] ...) in the defmacro of binding appears to a casual reader as if it could be replaced with (do ...) and everything would behave the same, but your code examples above, and the care with which most of Clojure's implementation is written, lead me to believe the let is significant there somehow.
There is no explicit top-level do in borkdude's examples. Is there an implicit one?
@theeternalpulse I've been told (by Alex, and maybe others) that it is both idiomatic and faster to reference the record's fields directly by their declared names, rather than use (:field this)
So now I only use this in the protocol definition syntax and when manipulating the record as a whole.
(and I often use _ if the body doesn't need to reference this)
Is “compilation unit” a technical term in the compiler or just a way to say they might not share thread bindings or something?
Interestingly when implementing a Clojure interpreter I also chose to handle forms in do as separate compilation units. I'd like to hear more about this as well. Is the body in a let not handled as a do block?
I am not sure why the binding is behaving that way, and not at a real computer to investigate
user=> (let [] (def x 1) (def y z))
Syntax error compiling at (REPL:1:19).
Unable to resolve symbol: z in this context
user=> x
#object[clojure.lang.Var$Unbound 0x29ea78b1 "Unbound: #'user/x"]
user=> (do (def x 1) (def y z))
Syntax error compiling at (REPL:1:15).
Unable to resolve symbol: z in this context
user=> x
1
I thought let had an implicit do and would be handled the same as if you had a real do
so first the entire let expression is analyzed, if there are any errors like unresolved symbols, none of the side effects will have happened
So for example, at compile time in your example, the var x was created, but because of compilation errors, the code was never run, which is why x is unbound
yeah. right now I handle every do the same, even inside a let expression, so maybe I'd have to revisit
Perhaps an aside-type detail: There are Clojure/JVM macros that cause mutations during macro-expansion. Fun!
Not "official" docs, but worth reading for some understanding of existing behavior.
I wouldn't be surprised if some/all of that is different in detail for ClojureScript, and/or not applicable somehow, but I have no detailed knowledge of ClojureScript in that area to say for sure. Just something to be mindful of if you care about cross-platform code.
yeah. I had some difficulties when writing tools around spec related to the order how things are analyzed in CLJS (e.g. try / catch), because analysis and macroexpansion also have side effects (e.g. in the spec implementation)
this was it: https://github.com/borkdude/respeced/blob/f5ff67aa78f588e7bad2a1b86dd1a646d3fdab3d/src/respeced/test.cljc#L18
I know they have a lot more in common than they have different, but I suspect someone getting into these level of details of Clojure/JVM and ClojureScript would have a deep appreciation of a statement like "The United States and Great Britain are two countries separated by a common language." ( https://en.wikiquote.org/wiki/English_language )
Also, thinking about it a bit more, perhaps a rationale for binding being 'parallel binding' rather than sequential like let is that making it sequential would probably require pushing N binding frames onto the stack, rather than only one, and then popping N off when the scope of the binding was exited. let doesn't create any kind of run-time stack in its implementation, and the sequential behavior is so very very useful. Common Lisp has parallel let and sequential let*, and I know Rich mentions explicitly in an early talk on Clojure to a Common Lisp audience that Clojure's let is CL's let*, by explicit choice.
So let is sequential because it doesn't cost any more implementation-wise to make it sequential, and it is so useful for it to be sequential in many contexts, and binding being parallel is perhaps due to efficiency concerns.
I noticed when implementing binding myself, I did the parallel implementation first, just because it was easier. Maybe that's how let got into Common Lisp, because of laziness on the implementor's behalf 😉
Unless someone wrote it up in a history of Lisp paper somewhere, that decision is probably over 50 years old now
It seems like to get to the origin of Common Lisp's let vs. let* and why they both exist, one would have to go back to the several popular flavors of Lisp that existed before Common Lisp was created, and see what they had. I suspect several of those prior Lisp flavors had both, and some may have had only one. I would guess that there were some very old Lisp implementations where let* was more expensive at run time than let, by at least enough for efficiency-minded programmers to care.
Funnily enough I made the same mistake as CLJS: https://github.com/borkdude/sci/issues/164