@mfikes: right, that’s interesting. I guess we’re entering into novel territory since this isn’t something that Clojure itself has ever really supported.
@dnolen: Yeah. ClojureScript is exploring new stuff. I'll keep looking. Agents don't block each other that frequently. (A few hundred times over a few minutes.) It could be anything.
I’m thinking it is not lock contention accessing meta. This didn’t affect things: https://github.com/mfikes/clojure/commit/15d6a3212ff467987bee1cafc9b740f2f3a9fc90
@martinklepsch: Neat! Thanks for all your work on this. What did you have to do to get the minimal case to show up?
@spinningtopsofdoom: I think there was some change when test.chuck moved to .cljc that broke the behaviour collection-check was relying on. tbh I don’t exactly remember anymore 😄
@mfikes: Thanks for testing it in Planck, when the collection check gets up and running I'll ping you for a second confirmation
@spinningtopsofdoom: gfredericks wants to move the shrunk reporting into test.check proper so it’s probably going to take some time until this ships in proper releases
@spinningtopsofdoom: Yeah… it would be very cool to see if all of that can run in a bootstrapped environment. That would be awesome.
@dnolen: Your simple perf improvements brought the lean map to within about 1–2% of current CLJS speed. I just pushed a simple benchmark build.
@dnolen: I could do that. If you are familiar with the Threads view in YourKit, it shows a colored bar representing a timeline of each thread (we essentially see a bar per agent), and each bar is solid “Runnable” state. So, no lock contention.
@dnolen: I added some timing debug-prns, and with one core it compiles each ns in about 2000 ms; as you increase agents, each ns slows down to about 6000 ms or so.
@dnolen: So, something is slowing down individual compiles. I’m gonna see if I can narrow it down to being I/O bound, or perhaps even memory-bandwidth bound. Something is going on.
@mfikes: and how does GC look? Usually when you’re doing this much concurrent work you need to provide a lot more memory.
Yeah… maybe I can add a watch on the atom or some such… (I’ve never tried to debug an “excessive retry” atom issue.)
@mfikes: well, a watch wouldn’t show you write contention, since it only fires on success
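A watch fires once per successful swap!, so it can’t reveal retries. One way to actually observe them (a sketch only; `counted-swap!` is a hypothetical helper, not compiler code) is to put a counter inside the function passed to swap!, since that function re-runs on every failed CAS:

```clojure
;; Sketch: count how many times the update fn runs per swap!.
;; Anything over 1 is a retry caused by write contention.
(defn counted-swap!
  [a f & args]
  (let [attempts (java.util.concurrent.atomic.AtomicLong. 0)
        ret      (apply swap! a
                        (fn [v & inner]
                          (.incrementAndGet attempts)   ; runs once per CAS attempt
                          (apply f v inner))
                        args)]
    (when (> (.get attempts) 1)
      (println "retries:" (dec (.get attempts))))
    ret))

;; Usage: (counted-swap! some-atom assoc ::k ::v)
```

Note the update fn must stay side-effect-free with respect to the atom itself; the counter is external, so re-running it is safe.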
Instead of all threads banging on *compiler*, they copy its contents and only do one swap at the end of the file.
(let [orig *compiler*] (binding [*compiler* (atom @*compiler*)] … (reset! orig @*compiler*)))
@mfikes yeah, not so informative, that one; compiler time is dominated by keyword lookups, for the obvious reasons
Here is a revised compile-task. Seems to run at the same speed as before: https://gist.github.com/mfikes/e8c48b177170ccde7a6a
yeah, my guess would be that the CAS is the issue, but not contention, just the compare part
Need to get David a new trashcan Mac Pro. With an iMac, since things scale linearly out to 4 to 6 cores, most people will never see this.
Here is the contention graph that I’m seeing by the way: https://raw.githubusercontent.com/mfikes/fifth-postulate/master/speedup.jpg
We’ll swap merge or whatever :) The idea is to reduce contention to once per file instead of once per def.
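The copy-then-merge idea could look roughly like this (a sketch under assumptions: `*compiler*` is the compiler’s dynamic state atom, and `compile-file*` is a hypothetical stand-in for the real per-file compile step):

```clojure
;; Sketch: each thread binds *compiler* to a private atom seeded from
;; the shared state, compiles the whole file against it, then merges
;; back with a single swap!. Hypothetical names throughout.
(defn compile-with-local-env
  [shared file]
  (binding [*compiler* (atom @shared)]
    (compile-file* file)
    ;; one swap! per file rather than one per def
    (swap! shared merge @*compiler*)))
```

A plain merge is the simplest strategy; if two files write the same keys, a deeper merge function would be needed in its place.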
@dnolen a lot of files compile in less than that, it would probably slow things down
@mfikes: yeah, no reason it should, since we make only as many agents as the fixed agent pool
@dnolen: Changed sleep from 10 to 100. No diff. (Probably because I have a flat set of independent namespaces.)
Here is a bit of interesting information: The first “wave” of compiles consistently all take about 11,500 ms (per compile) and then all the subsequent ones take 7100 ms.
@dnolen I turned off :source-map and no diff. I can copy the entire output target dir to a new one in milliseconds.
(This is with a partial build in place, but illustrates not much I/O probably.)
$ find target | wc -l
463
orion:fifth-postulate mfikes$ time cp -r target target2
real 0m0.143s
user 0m0.006s
sys 0m0.129s