#core-async
2019-12-02
yonatanel 14:12:50

Hi, I'm writing a channel producer which continuously gets collections with hundreds of items each and puts the items one by one onto the channel. I'd like to avoid these one-by-one puts by using `cat` as the channel's transducer, so that a single put expands an entire collection into the buffer, but there's an open bug where a single take will execute the next pending put, so each take can grow the buffer by hundreds of items and there will be no backpressure: https://clojure.atlassian.net/browse/ASYNC-210. Any advice? This breaks the backpressure semantics entirely.
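(A minimal sketch of the put-side pattern being described, with a made-up buffer size; with `cat` as the channel's transducer, one put of a collection expands into many items in the buffer, which is where the interaction with pending puts in ASYNC-210 hurts.)

```clojure
;; Sketch only: `cat` as the channel transducer expands each collection
;; that is put into individual items in the buffer.
(require '[clojure.core.async :as a])

(def items-ch (a/chan 512 cat))   ; buffer size 512 is arbitrary here

(a/>!! items-ch (range 300))      ; a single put; ~300 items land in the buffer
(a/<!! items-ch)                  ; => 0, one item per take
```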

vemv 15:12:09

Can alts!! be worse than (<!! (go (alts! ...)))? I could imagine that only the latter makes use of the IOC macros, so the former cannot block on N "alts" with the same fairness, etc.

Alex Miller (Clojure team) 16:12:31

I don't understand the question

Alex Miller (Clojure team) 16:12:01

alts!! can block on N alts

vemv 16:12:16

> alts!! can block on N alts
Yes, I have always perceived it that way, but now I'm encountering some weird issue. My thinking is: how does alts!! achieve that multi-blocking? Generally, regular threads cannot do that, so I assume alts!! does some magic. Now, is that magic strictly equivalent to doing the work in a go, or somehow worse?
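(For reference, the two forms being compared, as a sketch with placeholder channels; as noted below, both ultimately route through the same do-alts implementation.)

```clojure
(require '[clojure.core.async :as a])

(def ch1 (a/chan 1))
(def ch2 (a/chan 1))
(a/>!! ch1 :hello)

;; Blocking alts directly on the calling thread:
(a/alts!! [ch1 ch2])                  ; => [:hello ch1], i.e. [val winning-port]

;; Parking alts inside a go block, then blocking on the go's return channel:
(a/>!! ch2 :world)
(a/<!! (a/go (a/alts! [ch1 ch2])))    ; => [:world ch2]
```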

Alex Miller (Clojure team) 16:12:13

I think it's the exact same code iirc

👍 4
Alex Miller (Clojure team) 16:12:58

I do not have all that context loaded in my head to answer definitively though

👍 4
Alex Miller (Clojure team) 16:12:12

yeah, both cases route to the impl function do-alts

vemv 16:12:11

ace, thank you!

ghadi 17:12:15

@vemv each op in an alts expr competes to flip a shared flag

ghadi 17:12:52

Exactly one op will win and resume the thread/go

💯 4
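(The "exactly one op will win" behaviour is easy to see; a sketch, using poll! only to inspect which of two puts went through.)

```clojure
(require '[clojure.core.async :as a])

;; Both puts could succeed, but alts!! commits to exactly one of them:
(let [c1 (a/chan 1)
      c2 (a/chan 1)]
  (a/alts!! [[c1 :x] [c2 :x]])
  [(a/poll! c1) (a/poll! c2)])   ; => one :x and one nil (which one wins is random by default)
```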
hiredman 17:12:36

https://wingolog.org/archives/2017/06/29/a-new-concurrent-ml describes a select operation which is roughly equivalent to what alts does. It can be read as a sort of high-level, abstract overview of core.async internals, even though it isn't about Clojure or core.async

Alex Miller (Clojure team) 17:12:54

and incidentally, alts is imo one of the killer features of core.async

💯 8
jjttjj 22:12:44

What's everyone's general feeling on using core.async as a central message dispatch system, either via pub or mult? For example: having server-sent events all come into one async channel on a ClojureScript client, making one or more `pub`s/`mult`s on it, and passing those as a system component to UI components which can `sub`/`tap` as needed. My thought was that this might be able to replace something like re-frame's dispatch system. But I tried this out and found I end up getting tripped up a lot, and the bugs have seemed particularly hard to track down. I'm curious if I'm "doing it wrong", or if it's just me having holes in my knowledge/experience/understanding of core.async that lead to these kinds of errors. My thinking now is that core.async is better used as locally as possible, at the point where coordination between multiple streams is needed. In the alternative (locally tapping a central mult) you make your local tap dependent on all the other subscribers, because if any of them blocks, so will yours. Would anyone agree that exposing a mult or pub as a central system component is a bad idea? Or is this on me for not being diligent enough about my channels not blocking?
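(A rough sketch of the setup being described; channel names and buffer sizes are made up. Because a mult delivers each value to every tap before moving on, a tap that stops consuming stalls delivery to all the other taps once its buffer fills, which is exactly the coupling being worried about here.)

```clojure
(require '[clojure.core.async :as a])

(def events-ch   (a/chan 16))          ; all server-sent events arrive here
(def events-mult (a/mult events-ch))

;; Each UI component taps the central mult:
(def ui-a (a/tap events-mult (a/chan 16)))
(def ui-b (a/tap events-mult (a/chan 16)))

;; If nothing ever reads ui-b, then once its 16-slot buffer is full the mult
;; blocks, and ui-a (and the producer) stop seeing new events too.
```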

bortexz 08:12:39

In the UI, I prefer the re-frame dispatch system. It's a layer on top of a queue system (which could be implemented on top of core.async, although I think it has its own queue implementation), and re-frame subscriptions allow derived data to be described more easily than manually tapping channels. In re-frame you don't need to worry about blocking. When there's a channel that receives all the events from the server (e.g. a websocket), I would dispatch to re-frame from a consumer of that channel, and keep re-frame for the UI updates. If the server sends huge amounts of events and you might want to discard or buffer some of them, it could make sense to have some core.async design with buffers for that, but I would still keep the UI updates in re-frame. In the backend I usually have more need for fine-grained control over channels and buffers, partly because of multi-threading and partly because the number of events to handle is bigger, and I end up with designs similar to what you described.
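(A sketch of that split in ClojureScript, assuming a recent core.async where the go macros can be required from cljs.core.async; the namespace and event names are made up. A single consumer drains the server-event channel and hands each event to re-frame, which then owns the UI-update queue.)

```clojure
(ns example.server-events
  (:require [cljs.core.async :refer [go-loop <!]]
            [re-frame.core :as rf]))

(defn start-server-event-loop!
  "Consume every event coming from the server channel and dispatch it to re-frame."
  [server-events-ch]
  (go-loop []
    (when-some [event (<! server-events-ch)]
      (rf/dispatch [:server/event event])  ; re-frame queues it and updates the UI
      (recur))))
```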

Jan K 14:12:08

I'm using core.async pub/sub more on the server side. I did have trouble with subtle races and deadlocks for some time, but much less after I made a simplified facade/API on top of core.async pub/sub which is closely tailored to my use cases (rather than exposing the raw pub). I'm also using custom non-blocking buffers that log warnings when they start filling up, which helps debugging and prevents subs from blocking the publisher.
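(One way to get that "warn when it starts filling up" behaviour without touching core.async internals; a sketch, with arbitrary sizes, poll interval, and println standing in for real logging. It relies on keeping a reference to the buffer passed to chan, and on the built-in buffers being counted, so count reports how full they are. A dropping buffer keeps the publisher from blocking; the warning tells you a subscriber is falling behind.)

```clojure
(require '[clojure.core.async :as a])

(defn monitored-chan
  "Channel backed by a dropping buffer, plus a watchdog go-loop that warns when
  the buffer is at least `warn-at` full. Close the :stop channel to end the watchdog."
  [buf-size warn-at label]
  (let [buf  (a/dropping-buffer buf-size)
        ch   (a/chan buf)
        stop (a/chan)]
    (a/go-loop []
      (let [[_ port] (a/alts! [stop (a/timeout 1000)])]
        (when-not (= port stop)
          (when (>= (count buf) warn-at)
            (println "WARN:" label "buffer at" (count buf) "of" buf-size))
          (recur))))
    {:ch ch :stop stop}))
```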

jjttjj 17:12:57

I like the idea of warning buffers.

jjttjj 17:12:49

And I feel the same about core.async feeling more natural server-side, though I can't quite put my finger on why, and it doesn't seem fully explained by the obvious platform differences (i.e. threads).

bortexz 21:12:54

It might also be that the pattern of having a store as the single source of truth for your app, as re-frame does, is more widespread in the frontend and goes along with what Redux does in JS land, while in the backend there are more moving pieces with their own "state", decoupled from each other.