This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-09-14
Channels
- # 100-days-of-code (4)
- # announcements (1)
- # beginners (63)
- # boot (22)
- # braveandtrue (104)
- # calva (3)
- # cider (12)
- # cljs-dev (53)
- # cljsjs (3)
- # cljsrn (1)
- # clojure (180)
- # clojure-dev (14)
- # clojure-italy (4)
- # clojure-nl (11)
- # clojure-spec (15)
- # clojure-uk (60)
- # clojure-ukraine (1)
- # clojurescript (118)
- # clojutre (3)
- # core-async (12)
- # core-logic (17)
- # cursive (19)
- # datomic (45)
- # devcards (4)
- # emacs (7)
- # figwheel-main (218)
- # fulcro (27)
- # funcool (3)
- # graphql (1)
- # jobs (4)
- # leiningen (57)
- # off-topic (71)
- # pedestal (2)
- # portkey (17)
- # re-frame (5)
- # reitit (4)
- # remote-jobs (2)
- # ring (11)
- # rum (2)
- # shadow-cljs (14)
- # specter (11)
- # sql (34)
- # tools-deps (23)
Hi, I'm using core.async for coordinating IO and have doubts about my architecture. My app offers collaborative editing using operational transforms on documents (think Google Docs). One go-loop reads all messages sent by clients over a websocket; it keeps a map of document-id -> chan and forwards each message accordingly. Each document chan is consumed by a go-loop that receives the messages, updates the state of the document (transforming the message if necessary), persists the messages in the db (blocking!), and forwards them to the other clients editing the same document. Clients only have one message in flight at a time and latency is not a huge priority – I mainly want to avoid everything grinding to a halt. Would this setup work for a larger number of documents being edited? Alternatively, I could keep a fixed number N of go-loops that each handle M/N documents. Or is there an entirely different solution?
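(For concreteness, here is a minimal sketch of the router described above. The names `doc-chans`, `doc-chan`, and `start-router!` are illustrative, not from the actual app, and the message shape is assumed to carry a `:doc-id` key.)

```clojure
(require '[clojure.core.async :as a :refer [go-loop chan <! >!]])

;; map of document-id -> chan, grown lazily as documents are edited
(defonce doc-chans (atom {}))

(defn doc-chan [doc-id]
  ;; simplified lookup-or-create: safe here because only the single
  ;; router go-loop ever calls it
  (or (@doc-chans doc-id)
      (let [c (chan 16)]
        (swap! doc-chans assoc doc-id c)
        c)))

(defn start-router! [incoming]
  ;; one go-loop reads every client message and forwards it
  ;; to the channel for its document
  (go-loop []
    (when-let [{:keys [doc-id] :as msg} (<! incoming)]
      (>! (doc-chan doc-id) msg)
      (recur))))
```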
IO inside go blocks will break at medium scale: the fixed go thread pool is small enough that you could lock all of core.async up hard with fewer than ten documents
this is the reason clojure.core.async/thread
exists: it runs its body on a thread from an expandable pool (as opposed to the fixed pool go blocks use) and returns a channel that you can properly park on (rather than blocking on IO)
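(A tiny illustration of that point, with a made-up `save-to-db!` standing in for real blocking IO: `thread` does the blocking work on the expandable pool and hands back a channel, so the go block can park on it with `<!` instead of tying up one of the fixed go threads.)

```clojure
(require '[clojure.core.async :as a :refer [go <! thread]])

(defn save-to-db! [msg]
  ;; stand-in for a blocking database write
  (Thread/sleep 100)
  (assoc msg :saved? true))

(go
  ;; thread returns a channel that delivers the body's result;
  ;; <! parks this go block without blocking a go-pool thread
  (let [result (<! (thread (save-to-db! {:id 1})))]
    (println "saved:" result)))
```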
because of core.async's back pressure it is very easy to get into a situation where a slow consumer is blocking things for everyone else
so, instead of go-loop I use async/thread and loop and then things should work until I hit the thread limit, correct?
you can call thread inside a go-loop and park on the channel it returns
and you are very unlikely to hit any thread limit if you do that (unless you mean your OS limit for usable threads)
ah, so I create a go-loop for each doc, but then do the message handling (with the blocking IO) inside a thread call?
yeah - that's how I would do it at least
```
(go-loop [... ...]
  (let [msg (<! doc-chan)]
    (when (some? msg)
      (let [foo (<! (thread ...))]
        ...
        (recur ...)))))
```
- something like this
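(Filling in that sketch with hypothetical names — `transform-and-persist!` stands in for the blocking OT-transform-and-db-write step, which the conversation above leaves elided — a per-document loop might look like:)

```clojure
(require '[clojure.core.async :as a :refer [go-loop <! thread]])

(defn transform-and-persist!
  "Stand-in for the blocking work: transform the message against
   the current document state and persist it."
  [state msg]
  (update state :messages (fnil conj []) msg))

(defn start-doc-loop! [doc-chan]
  (go-loop [state {}]
    (when-let [msg (<! doc-chan)]
      ;; park on the thread's channel: the blocking work runs on
      ;; the expandable pool, not the fixed go pool
      (let [state' (<! (thread (transform-and-persist! state msg)))]
        ;; forwarding to the other clients would go here
        (recur state')))))
```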
gotcha, that makes perfect sense to me... thanks, noisesmith!
I treat (<! (thread ...))
as my placeholder for anything that might block – I rarely want to call a blocking operation inside a go block without parking on it immediately