This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-03-20
if I have multiple threads putting items onto the same channel, and that channel is read in one place from only a single go block, can I safely assume that the items will always be pulled off the channel in the same order they were put on it?
it depends on the buffer (if any), but the standard buffers are all FIFO, yes
the issue arose when reading messages from a websocket; the client reads on a background thread so as not to tie up the main thread with listening. it is important that i process socket messages in order, and if the socket receiver directly dispatches to the parsing functions, some of them might take a while, so a new message might get processed while an older one is still being processed. so now i just immediately dump all received messages onto a channel and then process them in a go block to ensure that they complete processing in order. does that sound about right?
if order of evaluation is more important than speed of processing, yes
if your processing on the client side takes a long time so you can't keep up with messages coming in from the server, you might want to add some kind of back pressure to throttle the server
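A minimal sketch of the pattern described above (the names `messages`, `on-socket-message`, and `process!` are hypothetical): the socket callback only enqueues onto a buffered channel, and a single consumer loop pulls messages off one at a time, preserving FIFO order.

```clojure
(require '[clojure.core.async :as async :refer [chan go-loop <! >!! close!]])

;; Buffered so the receiver thread rarely blocks on the put.
(def messages (chan 100))

(defn on-socket-message
  "Called from the websocket's background thread; only enqueues."
  [msg]
  (>!! messages msg))

(defn process!
  "Placeholder for the (possibly slow) parsing/dispatch work."
  [msg]
  (println "processing" msg))

;; Single consumer: messages are handled one at a time, in order.
(go-loop []
  (when-let [msg (<! messages)]
    (process! msg)
    (recur)))
```

One caveat: go blocks run on a small fixed thread pool, so if `process!` does long blocking work, a loop on `async/thread` with `<!!` is a better fit than a go block.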
Hi everybody, I'm using core.async to parallelize some work. Essentially I have two channels (`A` and `B`) and three worker functions (`fn1`, `fn2`, and `fn3`). `fn1` produces messages to `A` using a blocking put. I spin up 8 different "instances" of `fn2` which perform blocking gets on `A`, do some work with the message, and then write a result to `B`. `fn3` performs blocking gets on `B` and then does some work. So overall, my flow looks like this: `fn1 -> A -> pool of fn2 -> B -> fn3`. After `fn1` writes the final input to `A`, I close `A` and `fn1` exits. After a `fn2` instance reads from `A` and notices that the channel is closed, it exits. What's the best strategy for closing `B` and exiting `fn3`, given that I can only close `B` after all instances of `fn2` have exited?
My initial thought was to just move the work from `fn3` into `fn2`, except `fn3` must be thread safe
I was thinking of using an atom to track the number of open / running instances of `fn2` and closing `B` after all instances of `fn2` have exited, but wasn't sure if there was a better way to do this. Any ideas?
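A sketch of one common way to coordinate this shutdown without an atom, using stand-ins for `fn1`/`fn2`/`fn3` from the question: `async/thread` returns a channel that closes when its body returns, so a coordinator can wait on every worker's channel and only then close `B`.

```clojure
(require '[clojure.core.async :as async :refer [chan thread <!! >!! close!]])

(def A (chan 10))
(def B (chan 10))

(defn fn2-worker []
  (thread
    (loop []
      (when-let [msg (<!! A)]   ; nil means A is closed and drained
        (>!! B (* msg msg))     ; placeholder "work"
        (recur)))))

;; Coordinator: wait for all 8 workers to finish, then close B.
(let [workers (doall (repeatedly 8 fn2-worker))]
  (thread
    (doseq [w workers] (<!! w))
    (close! B)))

;; fn1: write the inputs, then close A so the workers drain and exit.
(thread
  (doseq [i (range 100)] (>!! A i))
  (close! A))

;; fn3: consume B until it closes.
(loop [acc 0]
  (if-let [v (<!! B)]
    (recur (+ acc v))
    acc))
```

The key design point is that the completion channels returned by `async/thread` already carry the "this worker exited" signal, so no shared counter is needed.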
stephenmhopper: `pipeline-blocking` is an abstraction that simplifies this kind of thing
you explicitly tell it what function to run and how many parallel threads to spin up running that function
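A minimal sketch of `pipeline-blocking` applied to the shape above: it moves values from `from` to `to` through a transducer, runs the work on the requested number of threads, and closes `to` on its own once `from` is closed and drained, which removes the manual shutdown coordination entirely.

```clojure
(require '[clojure.core.async :as async
           :refer [chan pipeline-blocking >!! <!! close! thread]])

(def from (chan 10))
(def to   (chan 10))

;; 8 parallel threads running the (placeholder) squaring work;
;; `to` is closed automatically when `from` is exhausted.
(pipeline-blocking 8 to (map (fn [x] (* x x))) from)

;; Producer: write inputs, then close the input channel.
(thread
  (doseq [i (range 5)] (>!! from i))
  (close! from))

;; Consumer: drain `to` until it closes.
(loop [acc []]
  (if-let [v (<!! to)]
    (recur (conj acc v))
    acc))
;; => [0 1 4 9 16]
```

Note that `pipeline-blocking` also preserves input order in its output, which the hand-rolled worker pool in the question does not guarantee.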
Oh cool. I didn't know that existed. I'll take a look. Thank you!
@stephenmhopper also take a look at this library in your evaluations: https://github.com/TheClimateCorporation/claypoole
Came across it (appears to be a respected tool) when looking for best abstractions in parallel workflows
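For comparison, a minimal claypoole sketch (assumes the library is on the classpath): `cp/pmap` is like `clojure.core/pmap` but runs on an explicitly sized pool, and `cp/with-shutdown!` tears the pool down afterwards.

```clojure
(require '[com.climate.claypoole :as cp])

;; Square the inputs on an 8-thread pool; cp/pmap preserves input order.
(cp/with-shutdown! [pool (cp/threadpool 8)]
  (doall (cp/pmap pool (fn [x] (* x x)) (range 5))))
;; => (0 1 4 9 16)
```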
@ajs okay, thank you, I'll take a look at that