
Why did the core.async threadpool go from 42 to 8, and why isn't it using the core count?


It does seem like the kind of thing that should be noted a bit brighter in the changelog

Alex Miller (Clojure team)11:06:26

I think the far more important thing is that it's now configurable

Alex Miller (Clojure team)11:06:55

It was noted in the announcement email and the README, and it's at the top of the apidocs (how to configure) - in what "brighter" way could it have been mentioned?
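For readers finding this later: the configuration hook referred to here is the clojure.core.async.pool-size Java system property (documented in the core.async apidocs). A minimal sketch of setting it, assuming you do so before core.async first loads:

```clojure
;; The pool is created once (defonce + delay), so the property must be
;; set before clojure.core.async is first required -- typically on the
;; command line:
;;   java -Dclojure.core.async.pool-size=16 ...
;; or programmatically, ahead of the first require:
(System/setProperty "clojure.core.async.pool-size" "16")
(require '[clojure.core.async :as async])
```

Setting the property after core.async has loaded has no effect, which is the defonce + delay trap discussed below.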


the same should be done for the thread macro's executor


sadly this only allows one setup for a whole JVM instance (you can't have different pools for different contexts)


also, if a piece of code changes that limit before you do, there's no way to change anything (since it's trapped in a defonce + delay)


I think what ztellman did with Manifold is a good example of a configurable execution policy:


That's what I modeled my changes after in the JIRA issue mentioned (I'm not saying the code is acceptable, but the approach/idea seems more flexible)


Regarding invokedynamic: while thankfully Rich isn't driven by features, it may be time to re-evaluate invokedynamic, as the implementation has stabilized greatly. Regarding JRuby's woes with indy: I'm less persuaded by the fact that they had legit troubles with it, because Ruby's dynamism is on another level than Clojure's (e.g. the method lookup chain). When I first started looking at indy, I initially thought that the opportunities for Clojure would include higher peak performance (and there are still improvements in that area); my current thoughts center, surprisingly, around improving startup time. (Required reading from JVM architect John Rose:)

> ... we expect the overhead of a method handle to be more like the overhead of a closure and less like the overhead of an inner class. This means that you will win if you "compress" your call sites using indy instead of open-coding them with many bytecodes. The warm-up will process the compact form, leading to less net data motion for loading, while the hot form of the code will be indistinguishable from the open-coded version.

- Faster/lazier init of Clojure's "constant pool" (aka the <clinit> method)
- Get rid of "bytecode boilerplate" for keyword or protocol invokes, vector and map creation


A practical example of "open-coded bytecode" is a map expression: (fn [foo] {:a foo :b 42}). The bytecode creates an array, loads :a onto the stack and puts it into the array, then does the same with foo, :b, and 42, and finally passes the array to RT/map.
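A rough Clojure-level rendering of that open-coded sequence (the real bytecode does this with raw array-store instructions; calling RT/map from Clojure here is just for illustration):

```clojure
;; Approximately what (fn [foo] {:a foo :b 42}) open-codes to:
;; fill an Object[] element by element, then pass it to RT/map.
(defn open-coded-map [foo]
  (let [arr (object-array 4)]
    (aset arr 0 :a)
    (aset arr 1 foo)
    (aset arr 2 :b)
    (aset arr 3 42)
    (clojure.lang.RT/map arr)))

(open-coded-map "x")
;; => {:a "x", :b 42}
```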


A "compressed" call site would just load all the elements onto the stack and call an indy instruction that is bootstrapped with a method handle encapsulating rolling things into an array and calling a map constructor


bootstrapping a method handle happens lazily and only once, on first invocation
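The kind of method handle such a call site could be linked to can be built directly with the java.lang.invoke API. A sketch of the idea (not actual compiler output): asCollector gathers four stack arguments into an Object[] and hands them to RT/map, which is exactly the "roll into an array and call a map constructor" step described above.

```clojure
(import '(java.lang.invoke MethodHandles MethodType))

;; A method handle of type (Object,Object,Object,Object) -> IPersistentMap:
;; collect the four args into an Object[] and call RT/map on it.
;; A bootstrap method would link an indy call site to a handle like
;; this once, lazily, on first invocation.
(def compressed-map-handle
  (-> (MethodHandles/lookup)
      (.findStatic clojure.lang.RT "map"
                   (MethodType/methodType clojure.lang.IPersistentMap
                                          (Class/forName "[Ljava.lang.Object;")))
      (.asCollector (Class/forName "[Ljava.lang.Object;") 4)))

(.invokeWithArguments compressed-map-handle [:a 1 :b 42])
;; => {:a 1, :b 42}
```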


Interesting that indy’s benefits have changed in that way. Thanks @ghadi !


@ghadi: thanks for that info... I've been investigating (very early still) a lazy loading approach similar to what the fastload branch achieved, but using invokedynamic and constant call sites


it's really mostly for my own sake, learning more about invokedynamic (and the Clojure compiler, obvs), but I started to wonder if indy was at all under consideration


@alexmiller: Maybe just putting a Breaking: in the changelog?


@danielcompton: I agree the change is important, but we should reserve “breaking” for places where documented semantics are broken. Is there a documented semantic that changed?

Alex Miller (Clojure team)21:06:32

Note that the setting is the max pool size, so it should have no effect on apps that never have more than 8 concurrent go blocks

Alex Miller (Clojure team)21:06:13

Note also that this change does not preclude adding more pool control in the future

Alex Miller (Clojure team)21:06:41

Aka no is temporary, yes is forever


nothing broke (as usual)


alexmiller: the maintainer's creed

Alex Miller (Clojure team)21:06:40

@mpenet Re the thread function, that doesn't use a pool at all and is just a convenience. You can spin up your own threads and do async things however you like.
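To make the "spin up your own threads" point concrete, here is a minimal sketch of the convenience thread provides, assuming core.async is on the classpath (thread-call* is a hypothetical name; the real macro also conveys bindings, which this omits):

```clojure
(require '[clojure.core.async :refer [chan close! >!! <!!]])

;; Run f on a freshly created thread -- no pool involved -- and
;; deliver its result on a channel, closing the channel when done.
(defn thread-call* [f]
  (let [c (chan 1)]
    (doto (Thread.
            (fn []
              (try
                (let [ret (f)]
                  (when (some? ret) (>!! c ret)))
                (finally (close! c)))))
      (.start))
    c))

(<!! (thread-call* #(+ 1 2)))
;; => 3
```

Because the thread is created explicitly, any threading policy (naming, priority, a custom pool) is entirely under your control.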

Alex Miller (Clojure team)21:06:07

So it is far less important than the go thread pool imo


Perhaps "Important:" then? Lowering the thread count in the pool seems like it could break, or diminish the effectiveness of, people's code that relied on there being more threads in the pool.


@alexmiller: sure, the thread issue is not very important. It's easy enough to copy-paste the code and fix the issue ourselves. Then again, I'd personally consider an unbounded thread pool kinda unusable. It's OK for toy stuff, but not otherwise. But I couldn't care less as long as chan ops and go blocks get more knobs first


also it's worth noting pipeline uses thread internally.


8 just seems like a really weird number to pick out of a hat. Not that 42 wasn't weird, but I would expect the numbers to become more meaningful over time, or become self-tuning; 42 -> 8 just seems like trading one arbitrary number for another. My guess is it isn't arbitrary, and the reason 8 is more meaningful than 42 is something I don't know, so for my own edification it would be nice to know


@hiredman: also, it was 42 + core count * 2, which is somewhat self-tuning, whereas 8 is fixed

Alex Miller (Clojure team)23:06:55

There is no right number. The important part is that it's configurable.