Thanks for the interesting responses, everyone. As a follow-up: a lot of what I'm seeing in the discussion here relates to performance and scaling. Has anyone found that the core.async model encourages good system design? It seems to me one of the biggest benefits would be the easy decoupling of subsystems and effects (although I'm not sure it's worth it for a small project over regular sync function calls).
The ability to easily decouple producer and consumer (for example req -produces> command -consumes> command handlers -produces> effects) seems like a win, but I'm interested if anyone has any thoughts on this in practice 🙂.
Side note, this community has been the most thoughtful and welcoming I’ve come across and I’m stoked to be here.
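The req -> command -> handler -> effect decoupling described above can be sketched with a couple of channels. This is a minimal illustration, not anyone's production design; the channel names and the `:debit` command are invented for the example.

```clojure
(require '[clojure.core.async :as a])

(def commands (a/chan 8))   ; requests produce commands onto this channel
(def effects  (a/chan 8))   ; command handlers produce effects onto this one

;; Command handler: consumes commands, produces effects. It knows nothing
;; about who produced the command or who will execute the effect.
(a/go-loop []
  (when-some [cmd (a/<! commands)]
    (a/>! effects {:effect :write-db :from cmd})
    (recur)))

;; A request just produces a command; the subsystems stay decoupled.
(a/>!! commands {:command :debit :amount 100})
(a/<!! effects)
;; => {:effect :write-db :from {:command :debit :amount 100}}
```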
I don't think the more "micro" uses of chan (i.e. alts!, used essentially as promises) are necessarily the same kind of decoupling
I primarily think of clojure.core.async as a library that helps reason about programs that have multiple, independent, logical processes that must coordinate. Compared to promises, futures, deferreds, and j.u.concurrent, I find it easier to write correct programs that have lots of moving parts.
In addition to the bounded queues, clojure.core.async offers alts!, which doesn't have a counterpart in many of the other options.
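A sketch of what alts! gives you: wait on several channels at once and learn which one delivered first, e.g. racing a result against a timeout. The function name `fetch-with-timeout` is invented for illustration.

```clojure
(require '[clojure.core.async :as a])

(defn fetch-with-timeout
  "Take from result-ch, or give up after ms milliseconds."
  [result-ch ms]
  (a/go
    ;; alts! parks until one of the channels is ready and returns
    ;; [value port], so we can tell which one won the race.
    (let [[v port] (a/alts! [result-ch (a/timeout ms)])]
      (if (= port result-ch)
        [:ok v]
        [:timeout nil]))))

(let [result (a/chan)]
  (a/go (a/>! result 42))
  (a/<!! (fetch-with-timeout result 1000)))
;; => [:ok 42]
```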
I also think having a strong theoretical basis (http://usingcsp.com/cspbook.pdf) really starts to show when you do have to implement a problem that is inherently very complex and correctness is important.
Some other architectural honorable mentions to look out for when evaluating libraries that target concurrency:
- Are there parking and non-parking versions of each operation?
- Is there a story for handling back pressure?
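For reference, core.async checks both boxes above: each operation has a parking version (`>!`/`<!`, inside go blocks) and a blocking version (`>!!`/`<!!`, on real threads), and bounded buffers give back pressure for free, since a put on a full channel parks the producer until a consumer catches up. A tiny sketch:

```clojure
(require '[clojure.core.async :as a])

(def ch (a/chan 2)) ; bounded buffer of 2: the source of back pressure

;; Fast producer: the third >! parks until the consumer takes a value.
(a/go
  (dotimes [n 3]
    (a/>! ch n)))   ; parking put; >!! is its blocking twin

(a/<!! ch) ;; blocking take on an ordinary thread => 0
```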
Another thing to keep in mind: yes, decoupling when something happens from where it happens is nice when you can afford it. Often, things need to be done now and in order. Tacking a bunch of queues into the middle of that sort of process will add significant complexity for little gain.
almost all modern webservers are an argument against that. Handling an HTTP request is something that happens "now and in order", so the traditional fork-a-thread-per-request model would seem to make sense, but I think every high-performance webserver written after this paper (https://people.eecs.berkeley.edu/~brewer/papers/SEDA-sosp.pdf) has stuck a bunch of queues in the middle of that
http://matt-welsh.blogspot.com/2010/07/retrospective-on-seda.html is a retrospective on the paper, written by one of its authors a decade later
Yeah, I mean. If you pull out the stopwatch and profiler and really dig into it, there are gains to be had.
But he pretty much summarizes my view in his retrospective:
> I would only put a separate thread pool and queue in front of a group of stages that have long latency or nondeterministic runtime, such as performing disk I/O.
But even with that, it's fair to say that manually managing a queue adds a lot of complexity compared to having a reified stack (à la Loom).
(As an abstraction. It’s fine to have asynchrony and queues in an implementation if you still offer a stack.)
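The retrospective's advice, giving only the long-latency stage its own thread pool and queue, maps directly onto core.async's `pipeline-blocking`. A sketch under invented names: `slow-read` stands in for disk I/O.

```clojure
(require '[clojure.core.async :as a])

(defn slow-read
  "Stand-in for a long-latency, blocking operation such as disk I/O."
  [id]
  (Thread/sleep 50)
  {:id id :bytes 1024})

(let [ids     (a/to-chan! (range 4))
      results (a/chan 4)]
  ;; Four dedicated blocking threads for just this stage; the rest of
  ;; the system can stay synchronous, per the retrospective's advice.
  (a/pipeline-blocking 4 results (map slow-read) ids)
  (count (a/<!! (a/into [] results))))
;; => 4
```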