2015-06-18
I have a periodic job that runs to get the status of a set of package shipments from various couriers (DHL, FedEx, etc.)
so I create a set of channels, issue the API call for each shipment, and put the returned result on its channel
I then core.async/merge those channels and use alts! in a go-loop to collect all the results
Here's my wait function
;; assumes (:require [clojure.core.async :as async :refer [go-loop alts! timeout]])
(defn wait-values [wait channels]
  (let [all   (async/merge channels)  ; qualified: plain merge would be clojure.core/merge
        t-out (timeout wait)]
    (go-loop [values []]
      (let [[value _] (alts! [all t-out])]
        (if (nil? value)              ; timeout fired, or every input channel closed
          values
          (recur (conj values value)))))))
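To illustrate how it might be called, a sketch with two pre-filled channels standing in for real courier responses (the maps here are hypothetical):

;; hypothetical stand-ins for real courier responses; to-chan closes
;; each channel after delivering its contents, so merge closes too
(let [dhl   (async/to-chan [{:courier :dhl   :status :delivered}])
      fedex (async/to-chan [{:courier :fedex :status :in-transit}])]
  (async/<!! (wait-values 5000 [dhl fedex])))
;; => both maps in arrival order (not input order), or fewer if the
;;    5000 ms timeout fires first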
Now this is where my understanding of core.async perhaps isn't as good as it needs to be: how would this scale?
It's probably fine with a small number of shipments, but what happens when the number of requests and channels gets large, with hundreds or thousands of shipments to track?
also, I think Timothy Baldridge has a demo here https://www.youtube.com/watch?v=enwIIGzhahw where he shows the use of 1000 channels
@matttylr: if you want to introduce parallelism, you may consider using a pipeline instead of a single go-loop
This must be a common pattern for core.async usage; I just wasn't succeeding in finding any good examples
well, the moment you're creating that final list, you'll have to do it in one process, right?
but if you want to do processing on the values of the list, you might consider doing that before you create the list
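A sketch of that suggestion: apply the per-result processing with a transducer across several concurrent workers via clojure.core.async/pipeline, and only then collect into one list. parse-status and shipment-channels are hypothetical names, not from the conversation:

;; sketch only: parse-status and shipment-channels are hypothetical
(let [results (async/merge shipment-channels)
      parsed  (async/chan)]
  ;; 4 concurrent workers apply the transducer; parsed closes when results closes
  (async/pipeline 4 parsed (map parse-status) results)
  (async/<!! (async/into [] parsed)))

Note that unlike the alts!/timeout loop above, this waits for every source channel to close, so slow couriers would need their own per-request timeouts.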
there's the fan-out of the queries, the processing of the results, and the collation of the derived data
typically you also want to control the parallelism of the API calls by issuing them from a pipeline-async, and then send the results to a pipeline with a transducer that performs the XPath queries
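Putting those two stages together, a sketch under stated assumptions: fetch-status! is a hypothetical non-blocking call that must put its response on the channel it is handed and then close that channel, and extract-fields stands in for the XPath work:

;; sketch: bounded-parallelism fan-out, then parsing; fetch-status!
;; and extract-fields are hypothetical
(defn track-all [shipments]
  (let [responses (async/chan)
        parsed    (async/chan)]
    ;; at most 8 courier API calls in flight at once
    (async/pipeline-async 8 responses
                          (fn [shipment res-ch]
                            ;; fetch-status! must put its response on
                            ;; res-ch and close res-ch when done
                            (fetch-status! shipment res-ch))
                          (async/to-chan shipments))
    ;; 4 concurrent workers run the extraction transducer
    (async/pipeline 4 parsed (map extract-fields) responses)
    (async/<!! (async/into [] parsed))))

Here pipeline-async caps how many requests are outstanding, while the downstream pipeline keeps the CPU-bound extraction off the I/O path.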