#core-async
2017-09-03
noisesmith 04:09:44

go blocks are for coordinating asynchronous state; you'll want something else to contain the actual work. core.async/thread returns a channel that eventually gets the return value of the code run on that thread, so it integrates with go blocks perfectly
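(A minimal sketch of that pattern; `expensive-computation` is just a hypothetical stand-in for real blocking or CPU-heavy work:)

```clojure
(require '[clojure.core.async :as async :refer [go thread <!]])

;; hypothetical stand-in for real blocking/CPU-heavy work
(defn expensive-computation []
  (Thread/sleep 1000)
  :done)

;; `thread` runs its body on a separate thread and returns a channel that
;; eventually receives the return value; the go block parks on <! while
;; waiting instead of tying up a real thread
(go
  (let [result (<! (thread (expensive-computation)))]
    (println "work finished:" result)))
```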

didibus 05:09:04

@noisesmith I guess I either don't have the use case to see the benefits, or don't understand how to model things that way. If I have heavy computation, threads are still the way to go. If I have blocking IO, threads are still the way to go. If I have a small computation, single threaded is best, unless I have tons of small computations, then fork/join might be better. So that leaves coordinating async IO like you said. But I rarely need so much coordination to justify this; one channel is almost always enough. And async IO is really just threaded IO wrapped in a future at a lower level. Am I missing something, or do I really just not have a use case that fits?

noisesmith 05:09:03

if you don't need coordination you don't need core.async, but most of the time if two things are not strictly ordered but you need to use them both, some sort of coordination is eventually needed, and beyond a certain complexity, core.async makes that a lot simpler

noisesmith 05:09:37

it's also possible to have interrupt-driven async IO that's not based on a thread or future (at least not at a level the VM ever sees)

noisesmith 05:09:11

for me it's never a "threads or async" question - I use threads if things need to be parallel, and core.async if I need to do anything more complex than a simple map-reduce

noisesmith 05:09:59

so it's either just threads, or threads plus core.async together

didibus 18:09:38

@noisesmith Right, so you use core.async for more complex data transformation pipelines that you want done in parallel. Somewhere in between a full distributed map/reduce and reducers/fold. I think I've seen exaggerated uses of it then; parallel requests to fetch data just need a few futures.
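(For example, a handful of parallel fetches really does only need futures; `fetch-user` below is a hypothetical stand-in for a blocking HTTP or DB call:)

```clojure
;; hypothetical stand-in for a blocking HTTP/DB call
(defn fetch-user [id]
  (Thread/sleep 100)
  {:id id})

;; kick off the requests in parallel, then deref to collect the results
(let [ids  [1 2 3]
      reqs (mapv #(future (fetch-user %)) ids)]
  (mapv deref reqs))
```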

noisesmith 18:09:46

Right. Though core.async also helps if you need to ensure you don't kill everything by starting too many futures at once.

didibus 19:09:12

Does it? How so? That's actually why I looked at it initially; I was going with a bounded executor. Can core.async offer bounds on the number of threads and the backing queue size?

noisesmith 19:09:09

channels are customizable queues; there are built-in core.async functions that let you control parallelism, plus the simple technique of starting N go-loops in a doseq, each one parking and processing a single item at a time (all reading from the same input channel)
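(A sketch of that go-loop technique, assuming the per-item work lives in a hypothetical `process` fn; the built-in functions alluded to include `pipeline` and `pipeline-blocking`:)

```clojure
(require '[clojure.core.async :as async :refer [chan go-loop <!]])

;; hypothetical stand-in for the per-item work
(defn process [item]
  (println "handling" item))

(def input (chan 100)) ; bounded buffer: puts park/block once it fills

;; start N consumers; each go-loop parks on <!, so at most N items are
;; in flight at once no matter how fast producers push onto `input`
(doseq [_ (range 4)]
  (go-loop []
    (when-some [item (<! input)]
      (process item)
      (recur))))
```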

noisesmith 19:09:51

and yes, you can decide how large a buffer to put on a channel, if that part was unclear
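(The buffer options, roughly:)

```clojure
(require '[clojure.core.async :refer [chan dropping-buffer sliding-buffer]])

(chan)                      ; unbuffered: every put must meet a take
(chan 10)                   ; fixed buffer of 10; puts park/block when it's full
(chan (dropping-buffer 10)) ; when full, new puts are dropped
(chan (sliding-buffer 10))  ; when full, the oldest value is dropped
```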