#architecture
2021-07-10
Zaymon04:07:17

Where does core.async fit in with web applications? At what point would you decide to use it? Where has it helped solve a problem in your application?

Linus Ericsson17:07:03

Pushing out changes to various clients via websockets using a mult channel, plus quite a lot of logic for handling the websockets themselves.
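A minimal sketch of that fan-out pattern (channel names and buffer sizes are hypothetical):

```clojure
(require '[clojure.core.async :as a])

;; One source channel carries every outbound change event.
(def changes (a/chan 16))

;; A mult lets any number of consumers tap the same stream.
(def changes-mult (a/mult changes))

;; Hypothetical per-websocket setup: each connected client taps the mult.
(defn connect-client! []
  (let [client-ch (a/chan 8)]
    (a/tap changes-mult client-ch)
    client-ch))

(def client-a (connect-client!))
(def client-b (connect-client!))

;; A single put fans out to every tapped channel.
(a/>!! changes {:event :update :value 42})
```

Each client's websocket handler would then take from its own tap with `(a/<!! client-a)` and write the event to the socket; `a/untap` removes a client when it disconnects.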

donaldball18:07:54

Anytime you want to wait for a message across a set of channels, you’re in the core.async sweet spot.
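For example, racing a result channel against a timeout, which is one common case of waiting across a set of channels:

```clojure
(require '[clojure.core.async :as a])

(def results (a/chan))

;; Simulate a worker that eventually produces a result.
(a/go (a/>! results :done))

;; alts!! blocks on a set of channels and returns [value channel],
;; so you can tell which source produced the value.
(let [[v ch] (a/alts!! [results (a/timeout 1000)])]
  (if (= ch results)
    [:got v]
    [:timed-out]))
```

Inside a go block the parking variant `a/alts!` does the same thing without tying up an OS thread.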

donaldball18:07:38

I have used them to good effect in managing work pools especially where I need to be able to insert priority messages like config changes and shutdown events, and moreover guarantee bounded graceful termination.
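One way that pattern can be sketched (the channel names and the :shutdown convention are hypothetical, not donaldball's actual code):

```clojure
(require '[clojure.core.async :as a])

(def work-ch    (a/chan 32)) ;; normal jobs
(def control-ch (a/chan))    ;; config changes and shutdown

(defn start-worker [handle-job]
  (a/go-loop []
    ;; :priority true means control messages win when both are ready.
    (let [[msg ch] (a/alts! [control-ch work-ch] :priority true)]
      (cond
        ;; control message: stop on :shutdown, otherwise keep looping
        (= ch control-ch) (when-not (= msg :shutdown) (recur))
        ;; nil means work-ch was closed: terminate gracefully
        (nil? msg)        nil
        :else             (do (handle-job msg) (recur))))))
```

The channel returned by `go-loop` closes when the worker exits, so termination can be awaited with a bounded `alts!!` against a timeout; with N workers you would send N :shutdown messages, or tap control-ch through a mult.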

Ben Sless18:07:45

@U04V4HWQ4 that's a very good use case, can you elaborate more on it?

fabrao12:07:42

Mauricio from Chlorine said you should use promise instead of core.async

Jakub Holý19:07:38

I would say, as always, it depends. Promise is simple. Core.async is powerful.
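Roughly, the trade-off in miniature (a sketch):

```clojure
(require '[clojure.core.async :as a])

;; A promise is the simple case: one value, delivered once,
;; dereferenced any number of times.
(def p (promise))
(future (deliver p :result))
(deref p 1000 ::timeout) ;; blocks (here, up to 1s) until delivered

;; A channel is a conduit for many values, and composes with
;; buffering, timeouts, alts!, mult, pipeline, and so on.
(def c (a/chan))
(a/go (a/>! c :result))
(a/<!! c)
```

If the job is "hand one value from A to B", the promise is hard to beat; the channel earns its complexity once you need streams, fan-out, or selection.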

Ben Sless13:07:01

I don't think we've found the right abstraction to mix the two. Regular old async might be better; focus on simple design instead.

potetm14:07:25

core.async is fundamentally a tool for scaling. There’s very little core.async can do that you cannot accomplish with java threads and queues.

potetm14:07:14

But core.async handles coordination in the JVM instead of passing it off to the OS, so it’s less memory/cpu intensive.

Ben Sless14:07:25

I wonder how much overhead is added by allocation and locking. core.async isn't lockless, it's just a good abstraction. Still leaks color if you mix it with business logic

potetm17:07:21

My understanding is that the bulk of the overhead of threads is 1) stack allocation for each thread, and 2) context switches between userland and kernel land.

potetm17:07:33

Could be wrong. It’s not exactly my realm of expertise.

potetm17:07:48

But if that’s the case, then locking is only problematic when it involves a context switch.

emccue19:07:45

@zaymon.a.fc Realistically, watching the bug list die down on Project Loom, my impression is you can expect more scalable non-async code in probably 3 years

emccue19:07:29

if your app won't be Netflix-big by that point, it's probably a smarter business decision to save developer time and just write sync code that's slower than it needs to be

emccue19:07:33

core.async is still a good tool, but the tradeoff of faster code for worse stack traces, limited syntax, etc isn't great

emccue19:07:11

core.async did it better than most, but the era of that tradeoff is near its end

emccue19:07:08

at least for the JVM - <!! and <! could be the same

lukasz19:07:14

I keep wondering if I'm missing out by not using core.async - all services in our product rely on threads/threadpools/futures/some of the j.u.concurrent stuff. Maybe it's because we have to ensure that any given service always has at least 2 instances, but I'm not sure what the actual benefit of using core.async would be

emccue19:07:54

Just like every other async thing, it's basically library-level green threads

emccue19:07:33

your machine can wait on way more IO connections than you can make OS threads for

emccue19:07:19

so if your threadpool is size 50, the most concurrent requests you can handle on a machine is 50

emccue19:07:51

core.async lets you make each thread juggle between different tasks with explicit yield points (cooperative concurrency)

emccue19:07:20

and that in turn lets you handle more than 50 tasks on 50 OS threads
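A rough illustration (numbers arbitrary): thousands of go blocks park on core.async's small fixed dispatch pool (8 threads by default), where the same number of blocking tasks would each need an OS thread and its stack:

```clojure
(require '[clojure.core.async :as a])

(def n 1000) ;; far more tasks than threads in the go dispatch pool

(def done (a/chan n))

(dotimes [i n]
  (a/go
    (a/<! (a/timeout 50)) ;; parks: the OS thread is released, not blocked
    (a/>! done i)))

;; Collect all n results; a/take closes its channel after n items.
(def completed (a/<!! (a/into [] (a/take n done))))
```

The `a/timeout` stands in for parked IO; the point is that all 1000 "waits" overlap on a handful of threads.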

emccue19:07:53

something something Little's law

lukasz19:07:48

Right, I see that. I guess it's the nature of what we're building: a lot of slow outbound network requests to external services, which are heavily rate limited, so having up to 50 concurrent requests doesn't really speed anything up

emccue19:07:04

yeah - and that's fine. A huge problem with async stuff existing is that it scratches the itch in our brains that tells us to be fast and do fast things

emccue19:07:00

and very few of us are Netflix scale, but if you buy into async you start making all your libraries follow async patterns, and that splits the ecosystem

lukasz19:07:44

Yeah, exactly, I might as well just spin up more service instances than squeeze more performance out of a single app

emccue19:07:55

but you have to admit that knowing that - that your time and effort is worth more than 5 extra machines or whatever - isn't obvious to everyone

emccue19:07:00

so as an ecosystem and a community we will be a lot better off with Loom even if it's not absolutely perfect

emccue19:07:52

One thing I want to see tried is the whole suggestion by the JVM team that you should handle rate limiting stuff with semaphores instead of just having 10 threads

emccue19:07:22

since I'm not exactly convinced that works for everyone

lukasz19:07:44

I wonder if this kind of bias comes from folks having a primarily software-engineering background; once you get into operations and how things actually run and have to serve customers, a lot of assumptions are invalidated for practical reasons. Regardless, Project Loom is really exciting, and even if it delivers 80% of its promises it's going to be a massive step forward

potetm19:07:15

@U3JH98J4R why wouldn’t that work for someone?

potetm19:07:20

2 semaphores: 1 for rate limiting (so you block on this one for a time), 1 for backpressure (so you immediately reject)
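A sketch of how those two semaphores might compose (the names, limits, and timeout are all hypothetical), using only java.util.concurrent:

```clojure
(import '(java.util.concurrent Semaphore TimeUnit))

(def in-flight (Semaphore. 10))  ;; rate limit: at most 10 concurrent calls
(def capacity  (Semaphore. 100)) ;; backpressure: at most 100 waiting + running

(defn guarded-call [f]
  ;; Backpressure semaphore: reject immediately when saturated.
  (if-not (.tryAcquire capacity)
    ::rejected
    (try
      ;; Rate-limit semaphore: block, but only for a bounded time.
      (if (.tryAcquire in-flight 5 TimeUnit/SECONDS)
        (try (f)
          (finally (.release in-flight)))
        ::timed-out)
      (finally (.release capacity)))))
```

Callers that see ::rejected can shed load upstream; ::timed-out distinguishes "queued too long" from "never admitted".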

potetm19:07:39

I’ve been thinking about this lately, so I’m curious your thoughts

emccue19:07:22

It works conceptually - you can always write new code that does it that way

emccue19:07:34

but I'm concerned about:

emccue19:07:35

1. The Java API for it. In Clojure we can make (with-semaphore semaphore ...) so we will be fine, but the Java API might be permanently clunky (ecosystem level)
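That hypothetical with-semaphore is only a few lines (a sketch, not an existing library macro):

```clojure
(import 'java.util.concurrent.Semaphore)

;; Acquire a permit around the body and always release it,
;; even if the body throws.
(defmacro with-semaphore [sem & body]
  `(let [s# ~sem]
     (.acquire s#)
     (try
       ~@body
       (finally (.release s#)))))

(def api-permits (Semaphore. 10))

;; At most 10 bodies like this run concurrently.
(with-semaphore api-permits
  :called-the-api)
```

The try/finally shape mirrors clojure.core's own locking macro, which is what makes the Clojure side easy regardless of how the Java API turns out.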

emccue19:07:06

2. Refactoring existing code might not be straightforward. It's all kosher if your API accesses are "well designed" and in a single namespace, but I doubt that's true of all the code that implicitly relies on assumptions of max-N parallelism

emccue19:07:55

3. I never even thought of 2 semaphores. Education on this is bad and I am actively interested and still basically an idiot

emccue19:07:19

4. Locks can be hard to get right, or to remember to use at all. Especially on a team - you might get a pointless deadlock (I think)

emccue19:07:37

5. just a general vibe of uncertainty that lives rent free in my brain

potetm22:07:43

yeah semaphores aren’t quite like a regular lock

potetm22:07:09

it’s fairly easy to avoid deadlock

Ben Sless20:07:52

There are some caveats to that - vthreads aren't free

emccue20:07:39

neither are go blocks

emccue20:07:09

it would make the use cases for cooperative concurrency a lot narrower, even if it isn't perfect for every use

Ben Sless20:07:37

True, it's just there are always surprises on that end of the spectrum, whatever implementation you go with