This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
Where does core.async fit in with web applications? At what point would you decide to use it? Where has it helped solve a problem in your application?
Pushing out changes to various clients via websockets using a mult channel, and quite a lot of logic handling websockets.
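A minimal sketch of that mult fan-out pattern, assuming core.async is on the classpath; channel names and buffer sizes are illustrative, not from the original message:

```clojure
(require '[clojure.core.async :as a])

(def events (a/chan 16))            ;; all outbound changes funnel through here
(def events-mult (a/mult events))   ;; one mult, many taps

(defn connect-client! []
  ;; each websocket client gets its own tapped channel; a sliding buffer
  ;; keeps one slow client from blocking everyone else
  (let [ch (a/chan (a/sliding-buffer 8))]
    (a/tap events-mult ch)
    ch))

(comment
  (let [client (connect-client!)]
    (a/>!! events {:type :config-changed})
    (a/<!! client)))   ;; the tapped channel sees every event put on `events`
```

Each message put on `events` is delivered to every tapped channel, which is exactly the "push to all connected clients" shape.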
Anytime you want to wait for a message across a set of channels, you’re in the core.async sweet spot.
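A hedged illustration of that sweet spot (the channels and values here are made up): `alts!!` blocks until any of the given channels yields a value, and reports which one it was.

```clojure
(require '[clojure.core.async :as a])

(let [ch-a (a/chan)
      ch-b (a/chan)]
  (a/go (a/>! ch-b :from-b))
  ;; alts!! parks until *any* channel in the vector produces a value,
  ;; returning [value channel-it-came-from]
  (let [[val port] (a/alts!! [ch-a ch-b])]
    [val (= port ch-b)]))   ;; => [:from-b true]
```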
I have used them to good effect in managing work pools especially where I need to be able to insert priority messages like config changes and shutdown events, and moreover guarantee bounded graceful termination.
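One guess at what that worker shape looks like — `alts!` with `:priority true` so control messages (config changes, shutdown) always win over queued work; the names and the `:shutdown` convention are mine, not from the message:

```clojure
(require '[clojure.core.async :as a])

(defn worker
  "Pulls from `work`, but checks `control` first on every iteration."
  [control work]
  (a/go-loop []
    (let [[msg port] (a/alts! [control work] :priority true)]
      (cond
        (= port control)
        ;; a closed control channel or a :shutdown message stops the loop;
        ;; anything else (e.g. a config change) is applied and we continue
        (when (and (some? msg) (not= msg :shutdown))
          (recur))

        (some? msg)                 ;; nil means the work channel closed
        (do (msg)                   ;; run one unit of work
            (recur))))))
```

Putting `:shutdown` on `control` (or closing either channel) ends the loop, and the channel returned by `go-loop` closes when it does — which is one way to get the bounded, graceful termination mentioned above.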
I don't think we've found the right abstraction for mixing the two. Regular old async might be better; focus on simple design instead
core.async is fundamentally a tool for scaling. There’s very little core.async can do that you cannot accomplish with java threads and queues.
But core.async handles coordination in the JVM instead of passing it off to the OS, so it’s less memory/cpu intensive.
I wonder how much overhead is added by allocation and locking. core.async isn't lockless, it's just a good abstraction. It still leaks color (the sync/async function split) if you mix it with business logic
My understanding is the bulk of the overhead of threads is 1) stack allocation for each thread, and 2) context switches between userland and kernel-land.
But if that’s the case, then locking is only problematic when it involves a context switch.
@zaymon.a.fc Realistically, watching the bug list die down on Project Loom, my impression is you can expect more scalable non-async code in probably 3 years
if your app won't be Netflix-big by that point, it's probably a smarter business decision to save developer time and just write sync code that's slower than it needs to be
core.async is still a good tool, but the tradeoff of faster code for worse stack traces, limited syntax, etc isn't great
I keep thinking I'm missing out by not using core.async. All the services in our product rely on threads/threadpools/futures/some of the j.u.concurrent stuff. Maybe because we have to ensure that any given service always has at least 2 instances, I'm not sure what the actual benefit of using core.async would be
so if your thread pool is size 50, the most concurrent requests you can handle on a machine is 50
core.async lets you make each thread juggle between different tasks with explicit yield points (cooperative concurrency)
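A sketch of what those yield points look like: `<!` inside a `go` block parks rather than blocking a thread, so far more tasks than pool threads can be in flight at once (the numbers here are arbitrary):

```clojure
(require '[clojure.core.async :as a])

(let [results (a/chan)]
  (dotimes [i 1000]
    (a/go                        ;; 1000 lightweight go blocks, not 1000 threads
      (a/<! (a/timeout 10))      ;; parks this go block; the pool thread is freed
      (a/>! results i)))
  ;; collect all 1000 results (order is nondeterministic)
  (count (a/<!! (a/into [] (a/take 1000 results)))))   ;; => 1000
```

All 1000 "tasks" make progress on core.async's small fixed thread pool, because each one yields at its `<!`/`>!` points.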
Right, I see that. I guess it's the nature of what we're building: a lot of slow outbound network requests to external services, which are heavily rate limited, so having up to 50 concurrent requests doesn't really speed up anything
yeah - and that's fine. A huge problem with async stuff existing is that it scratches the itch in our brains that tells us to be fast and do fast things
and very few of us are netflix scale, but if you buy into async you start making all your libraries follow async patterns and split the ecosystem
Yeah, exactly, I might as well just spin up more service instances than squeeze more performance out of a single app
but you have to admit that knowing that - that your time and effort is worth more than 5 extra machines or whatever - isn't obvious to everyone
so as an ecosystem and a community we will be a lot better off with Loom even if it's not absolutely perfect
One thing I want to see tried is the whole suggestion by the JVM team that you should handle rate limiting stuff with semaphores instead of just having 10 threads
I wonder if this kind of bias comes from folks having a primarily software engineering background; once you get into operations and how things actually run and have to serve customers, a lot of assumptions are invalidated for practical reasons. Regardless, Project Loom is really exciting, and even if it delivers 80% of its promises it's going to be a massive step forward
2 semaphores: 1 for rate limiting (so you block on this one for a time), 1 for backpressure (so you immediately reject)
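A guess at how those two semaphores might look in Clojure, via `java.util.concurrent.Semaphore` — the permit counts and the `::rejected` convention are invented for illustration:

```clojure
(import 'java.util.concurrent.Semaphore)

(def ^Semaphore backpressure (Semaphore. 100))  ;; max in-flight requests overall
(def ^Semaphore rate         (Semaphore. 10))   ;; max concurrent upstream calls

(defn call-upstream [f]
  (if-not (.tryAcquire backpressure)   ;; full? reject immediately (backpressure)
    ::rejected
    (try
      (.acquire rate)                  ;; otherwise block until a rate permit frees up
      (try
        (f)
        (finally (.release rate)))
      (finally (.release backpressure)))))
```

So callers either get an immediate `::rejected` when the system is saturated, or they queue for one of the 10 permits — which replaces "just have 10 threads" without pinning a thread per permit.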
1. The Java API for it. In Clojure we can make (with-semaphore semaphore ...) so we will be fine, but the Java API might be permanently clunky (ecosystem level)
2. Refactoring existing code might not be straightforward. It's all kosher if your API accesses are "well designed" and in a single namespace, but I doubt that's true of all the code that implicitly relies on assumptions of max-N parallelism
3. I never even thought of 2 semaphores. Education on this is bad and I am actively interested and still basically an idiot
4. Locks are hard to get right or remember to use. Especially on a team - you might get a pointless deadlock (I think)
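For what it's worth, the (with-semaphore ...) from point 1 could be sketched roughly like this; the macro itself and the API call in the comment are hypothetical:

```clojure
(import 'java.util.concurrent.Semaphore)

(defmacro with-semaphore
  "Acquire one permit from sem, run body, and always release the permit,
  even if body throws."
  [sem & body]
  `(let [s# ~sem]
     (.acquire s#)
     (try
       ~@body
       (finally (.release s#)))))

(comment
  (def api-permits (Semaphore. 10))
  (with-semaphore api-permits
    (call-rate-limited-api)))   ;; hypothetical upstream call
```

The `try`/`finally` is what makes this safer than hand-written acquire/release pairs scattered across a codebase (point 4).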
it would make the cooperative concurrency use cases a lot narrower, even if it isn't perfect for every use