#core-async
2018-03-20
ajs15:03:39

if I have multiple threads putting items onto the same channel, and that channel is read in one place from only a single go block, can I safely assume that the items will always be pulled off the channel in the same order they were put on it?

noisesmith15:03:14

it depends on the buffer (if any), but the standard buffers are all FIFO, yes

ajs15:03:31

the issue arose when reading messages from a websocket; the client reads on a background thread so as not to tie up the main thread with listening. it is important that i process socket messages in order, and if the socket receiver dispatches directly to the parsing functions, some of them might take a while, so a newer message could get processed while an older one is still being processed. so now i just immediately dump all received messages onto a channel and then process them in a single go block to ensure that they complete processing in order. does that sound about right?

noisesmith15:03:01

if order of evaluation is more important than speed of processing, yes
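
A minimal sketch of that pattern (the names `parse-message` and `on-message` and the buffer size are illustrative, not from the discussion):

```clojure
(require '[clojure.core.async :as async :refer [chan go-loop <! >!!]])

;; hypothetical stand-in for the real message-parsing code
(defn parse-message [msg]
  (println "parsed" msg))

;; buffered so the socket's background thread rarely blocks
(def messages (chan 100))

;; the websocket receiver only enqueues; it never parses directly
(defn on-message [msg]
  (>!! messages msg))

;; a single go-loop drains the channel, so messages finish
;; processing strictly in the order they were received
(go-loop []
  (when-let [msg (<! messages)]
    (parse-message msg)
    (recur)))
```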

hiredman16:03:02

if your processing on the client side takes a long time, so you can't keep up with messages coming in from the server, you might want to add some kind of back pressure to throttle the server

ajs16:03:11

Not my server

ajs16:03:54

I assume you mean if I also was in control of the websocket server

stephenmhopper16:03:04

Hi everybody, I'm using core.async to parallelize some work. Essentially I have two channels (`A` and `B`) and three worker functions (`fn1`, `fn2`, and `fn3`). `fn1` produces messages to `A` using a blocking put. I spin up 8 different "instances" of `fn2` which perform blocking gets on `A`, do some work with the message, and then write a result to `B`. `fn3` performs blocking gets on `B` and then does some work. So overall, my flow looks like this: `fn1` -> `A` -> pool of `fn2` -> `B` -> `fn3`. After `fn1` writes the final input to `A`, I close `A` and `fn1` exits. After a `fn2` instance reads from `A` and notices that the channel is closed, it exits. What's the best strategy for closing `B` and exiting `fn3`, given that I can only close `B` after all instances of `fn2` have exited?

stephenmhopper16:03:31

My initial thought was to just move the work from `fn3` into `fn2`, except the work in `fn3` would then have to be thread safe

stephenmhopper16:03:24

I was thinking of using an atom to track the number of open / running instances of fn2 and closing B after all instances of fn2 have exited, but wasn't sure if there was a better way to do this. Any ideas?
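
A rough sketch of that atom-counting approach (channel sizes and the `fn2-work` function are made up for illustration):

```clojure
(require '[clojure.core.async :as async :refer [chan thread <!! >!! close!]])

(def A (chan 10))
(def B (chan 10))

;; hypothetical stand-in for the per-message work done by fn2
(defn fn2-work [msg] (str "processed " msg))

(def n-workers 8)
(def remaining (atom n-workers))

(dotimes [_ n-workers]
  (thread
    (loop []
      (if-let [msg (<!! A)]
        (do (>!! B (fn2-work msg))
            (recur))
        ;; A is closed and drained; the last worker out closes B,
        ;; which lets fn3 exit once it sees B close
        (when (zero? (swap! remaining dec))
          (close! B))))))
```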

noisesmith16:03:37

stephenmhopper: `pipeline-blocking` is an abstraction that simplifies this kind of thing

noisesmith16:03:18

you explicitly tell it what function to run and how many parallel threads to spin up running that function
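
For reference, the `fn2` pool expressed with `pipeline-blocking` looks roughly like this (the transducer and channel sizes are illustrative):

```clojure
(require '[clojure.core.async :as async :refer [chan pipeline-blocking]])

(def A (chan 10))
(def B (chan 10))

;; hypothetical stand-in for the per-message work done by fn2
(defn fn2-work [msg] (str "processed " msg))

;; 8 blocking workers consume A and write results to B;
;; by default B is closed once A closes and all in-flight work finishes,
;; so fn3 can simply read B until it gets nil
(pipeline-blocking 8 B (map fn2-work) A)
```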

stephenmhopper16:03:40

Oh cool. I didn't know that existed. I'll take a look. Thank you!

ajs16:03:39

@stephenmhopper also take a look at this library in your evaluations: https://github.com/TheClimateCorporation/claypoole

ajs16:03:47

I came across it (it appears to be a respected tool) when looking for good abstractions for parallel workflows

stephenmhopper17:03:42

@ajs okay, thank you, I'll take a look at that