#core-async
2019-12-04
penryu 09:12:21

Has anyone done any profiling on which part of core.async contributes most to the load time?

penryu 09:12:39

It's not prohibitive, but it was enough that I refactored some code to use future instead of thread to avoid it.
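A minimal sketch of that kind of refactor, assuming the goal is just to run blocking work off-thread and get the result back (the channel-returning `a/thread` is swapped for a plain `clojure.core/future`, which needs no extra require; `run-off-thread` is a hypothetical helper, not penryu's actual code):

```clojure
;; Before: pulls in core.async just to run blocking work off-thread.
;; (require '[clojure.core.async :as a])
;; (let [c (a/thread (slow-io))]   ; a/thread returns a channel
;;   (a/<!! c))                    ; block for the result

;; After: clojure.core/future, no extra namespace to load.
(defn run-off-thread
  "Run blocking work on another thread; deref the return value for the result."
  [f]
  (future (f)))

@(run-off-thread (fn [] (+ 1 2)))
;; => 3
```

The trade-off: `future` uses an unbounded agent pool and gives you a deref-able, while `thread` gives you a channel you can compose with the rest of core.async.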

yonatanel 11:12:15

What was your thread doing and how many did you create?

penryu 11:12:20

The delay happens at require time. Just adding the (:require...) clause to the ns declaration adds about a second of time.
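A quick way to reproduce that measurement in a fresh REPL (the ~1 second figure is penryu's; actual numbers vary with hardware, JVM warmup, and core.async version):

```clojure
;; In a fresh JVM, time the first require of core.async.
;; Subsequent requires are near-instant because loaded namespaces are cached.
(time (require '[clojure.core.async :as a]))
;; prints "Elapsed time: ... msecs" for the one-time load cost
```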

penryu 11:12:33

I'm sure there's plenty of stuff in async that makes that delay reasonable (if it's needed). I was just wondering if anyone had looked into what causes the delay.

dpsutton 12:12:10

I think there’s a thread pool created on demand

alexmiller 13:12:46

It’s the macro stuff, has nothing to do with the threads

alexmiller 13:12:22

Ghadi actually has a speculative refactor that loads the go loop stuff on demand, avoiding the delay if you’re not using go blocks, but I’m not sure if that’s common enough to be worth the trouble
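One way to see why the macro side dominates: even a trivial `go` block expands (via the internal state-machine rewriting in `clojure.core.async.impl.ioc-macros`) into a form far larger than the source, and all of that machinery has to be compiled when the namespace loads. A rough illustration (the size comparison is illustrative; exact numbers vary by version):

```clojure
(require '[clojure.core.async :as a])

;; Expand a one-expression go block into its state-machine form.
(def expansion (macroexpand '(clojure.core.async/go (inc 1))))

;; Compare printed size of source vs. expansion: the expansion is
;; typically orders of magnitude larger than the source expression.
(count (pr-str '(a/go (inc 1))))   ; tens of characters
(count (pr-str expansion))         ; usually thousands
```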

yonatanel 15:12:58

So is that considered a bug? Rich even said in his Inside Transducers + more.async talk that the backpressure semantics change with expanding transducers like cat, but the buffer size is still bounded. With the current behavior it can grow indefinitely.
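The behavior being asked about can be demonstrated directly: with an expanding transducer like `mapcat` (or `cat`), a single put can leave more items in the buffer than its stated size. A small sketch (this only shows the overflow; it takes no position on whether the semantics are a bug):

```clojure
(require '[clojure.core.async :as a])

;; Buffer of size 1, but the transducer expands each input into 10 outputs.
(def c (a/chan 1 (mapcat (fn [x] (repeat 10 x)))))

;; One put succeeds, and all 10 expanded items land in the buffer at once.
(a/>!! c :x)

;; All 10 can be taken without blocking, so the buffer momentarily held
;; 10 items despite its nominal size of 1.
(into [] (repeatedly 10 (fn [] (a/poll! c))))
;; => [:x :x :x :x :x :x :x :x :x :x]
```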