#clojure-dev
2016-06-22
hiredman01:06:03

Why did the core.async threadpool go from 42 to 8, and why isn't it using the core count?

danielcompton04:06:14

It does seem like the kind of thing that should be noted a bit brighter in the changelog

alexmiller11:06:26

I think the far more important thing is that it's now configurable

alexmiller11:06:55

It was noted in the announcement email and the README, and it's at the top of the API docs (how to configure it) - in what "brighter" way could it have been mentioned?
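For reference, the knob being discussed is the `clojure.core.async.pool-size` JVM system property, which caps the go-block dispatch pool (the value here is illustrative; the default is the 8 mentioned above):

```
java -Dclojure.core.async.pool-size=16 -cp ... clojure.main ...
```

It has to be set before core.async is loaded, since (as mpenet notes below) the pool is created once behind a defonce + delay.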

mpenet13:06:54

the same should be done for the thread macro's executor

mpenet13:06:41

sadly this only allows one setup for a whole JVM instance (you can't have different pools for different contexts)

mpenet13:06:41

also, if another piece of code changes that limit before you do, there's no way to change anything (since it's trapped in a defonce + delay)

dm313:06:42

I think what ztellman did with Manifold is a good example of a configurable execution policy: https://github.com/ztellman/manifold/blob/master/docs/execution.md

mpenet13:06:37

That's what I modeled my changes on in the jira issue mentioned (I'm not saying the code is acceptable, but the approach/idea seems more flexible)

ghadi15:06:58

Regarding invokedynamic: while thankfully Rich isn't driven by features, it may be time to re-evaluate invokedynamic, as the implementation has stabilized greatly. Regarding JRuby's woes with indy: I'm less persuaded by the fact that they had legit troubles with it, because Ruby's dynamism is on another level than Clojure's (e.g. the method lookup chain). When I first started looking at indy, I initially thought that the opportunities for Clojure would include higher peak performance (and there are still improvements in that area); surprisingly, my current thoughts now center around improving startup time. http://mail.openjdk.java.net/pipermail/mlvm-dev/2014-August/005890.html (required reading from JVM architect John Rose):

> ... we expect the overhead of a method handle to be more like the overhead of a closure and less like the overhead of an inner class. This means that you will win if you "compress" your call sites using indy instead of open-coding them with many bytecodes. The warm-up will process the compact form, leading to less net data motion for loading, while the hot form of the code will be indistinguishable from the open-coded version.

Possible wins:
- Faster/lazier init of Clojure's "constant pool" (aka the <clinit> method)
- Get rid of "bytecode boilerplate" for keyword or protocol invokes, vector and map creation

ghadi15:06:22

A practical example of "open-coded bytecode" is a map expression: (fn [foo] {:a foo :b 42}). The bytecode creates an array, loads :a onto the stack, puts it into the array, then does the same with foo, :b, and 42, then passes the array to RT/map

ghadi15:06:53

A "compressed" callsite would just load all the elements onto the stack and call an indy instruction that is bootstrapped with a method handle that encapsulates rolling things into an array and calling a map constructor

ghadi15:06:20

bootstrapping a methodhandle happens lazily and only once on first invocation

seancorfield16:06:58

Interesting that indy’s benefits have changed in that way. Thanks @ghadi !

ragge18:06:18

@ghadi: thanks for that info... I've been investigating (very early still) a lazy loading approach similar to what the fastload branch achieved, but using invokedynamic and constant call sites

ragge18:06:26

it's really mostly for my own sake, learning more about invokedynamic (and the Clojure compiler, obvs), but I started to wonder if indy was at all under consideration

danielcompton20:06:53

@alexmiller: Maybe just putting a "Breaking:" label in the changelog?

stuarthalloway21:06:31

@danielcompton: I agree the change is important, but we should reserve “breaking” for places where documented semantics are broken. Is there a documented semantic that changed?

alexmiller21:06:32

Note that the setting is the max pool size so it should have no effect on apps that never have more than 8 concurrent go blocks

alexmiller21:06:13

Note also that this change does not preclude adding more pool control in the future

alexmiller21:06:41

Aka no is temporary, yes is forever

ghadi21:06:47

nothing broke (as usual)

ghadi21:06:03

alexmiller: the maintainer's creed

alexmiller21:06:40

@mpenet Re the thread function, that doesn't use a pool at all and is just a convenience. You can spin up your own threads and do async things however you like.
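A hypothetical sketch of that "spin up your own threads" approach, in plain Java for portability (the class name, pool size, and BlockingQueue standing in for a result channel are all made up for illustration): a bounded executor you control runs the blocking work, instead of an unbounded convenience pool.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OwnPool {
    public static void main(String[] args) throws Exception {
        // A bounded pool under your control - sized however your app needs.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // A queue standing in for the channel the result would land on.
        BlockingQueue<String> results = new ArrayBlockingQueue<>(1);
        pool.submit(() -> results.offer("done blocking work"));
        // Consumer blocks until the worker delivers.
        System.out.println(results.take()); // done blocking work
        pool.shutdown();
    }
}
```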

alexmiller21:06:07

So it's far less important than the go thread pool imo

danielcompton21:06:57

Perhaps "Important:" then? Lowering the thread count in the thread pool seems like it could break or diminish the effectiveness of people's code where they were relying on there being more threads in the pool. https://groups.google.com/d/msg/clojure/EDkrj-45vb4/VLNONJW5AQAJ

mpenet22:06:41

@alexmiller: sure, the thread issue is not very important. It's easy enough to copy-paste the code and fix the issue ourselves. Then again, I would personally consider an unbounded thread pool kind of unusable. It's OK for toy stuff, but not usable otherwise. But I couldn't care less as long as chan ops and go blocks get more knobs first

mpenet22:06:00

also, it's worth noting that pipeline uses thread internally.

hiredman22:06:27

8 just seems like a really weird number to pick out of a hat - not that 42 wasn't weird, but I would expect numbers to become more meaningful over time, or become self-tuning. 42 -> 8 just seems like trading one arbitrary number for another. My guess is it isn't arbitrary, and the reason 8 is more meaningful than 42 is something I don't know, so for my own edification it would be nice to know

danielcompton22:06:14

@hiredman: also, it was 42 + core count * 2, which is somewhat self-tuning, whereas 8 is fixed

alexmiller23:06:55

There is no right number. The important part is that it's configurable.