2022-01-04
Channels
- # aleph (1)
- # asami (6)
- # babashka (44)
- # beginners (20)
- # calva (6)
- # circleci (1)
- # clj-kondo (2)
- # cljdoc (2)
- # clojure (184)
- # clojure-europe (13)
- # clojure-nl (4)
- # clojure-spain (1)
- # clojure-uk (4)
- # clojurescript (35)
- # code-reviews (1)
- # conjure (3)
- # core-async (60)
- # core-logic (1)
- # cursive (11)
- # data-science (2)
- # events (11)
- # graalvm (4)
- # graphql (2)
- # introduce-yourself (1)
- # jobs (2)
- # leiningen (3)
- # malli (16)
- # minecraft (6)
- # practicalli (1)
- # reagent (3)
- # reitit (1)
- # releases (3)
- # remote-jobs (2)
- # rewrite-clj (21)
- # shadow-cljs (12)
- # tools-deps (21)
- # vim (16)
Is there a way to have a channel that implements java.lang.AutoCloseable?
but it would be good to use channels in a system using https://github.com/piotr-yuxuan/closeable-map
It doesn't make sense not to bind channels to an outside system configuration; better to make handlers depend on a channel that is in your system-map
to get rid of unmanageable global configuration
btw, it's very sad that "channels outlive lexical scope"
Every function which returns a channel or go block returns a channel which outlives its lexical scope
but if I keep track, in a system map, of the chans that I've started and try to guarantee that they're closed every time I restart the system
seems reasonable
For a clean shutdown, the producer has to finish, then you close the channel, then the consumers finish
what happens if you close a channel before all its items are consumed?
I need to think about whether closing a channel gives me some guarantees that everything has been consumed.
I'm not an expert, but distributed systems shutdown is not a trivial problem. If you plan to do something like this, I guess you should:
1. stop the webserver traffic: either reject/redirect requests or deactivate the instance's network traffic
2. send the stop signal, i.e. close the main channel and wait for all threads to finish
3. as @ben.sless said, as each consumer receives the eof signal, it forwards the signal down the pipeline DAG. Some interesting questions: what happens if you get a duplicated eof? do you need to coordinate a consumer group/level shutdown? what happens if eof is forwarded to the next level before sibling consumers are done?
4. when the DAG is done, stop/quit the process
I agree with this for sure, but:
1. core.async does not seem like a tool for this (???)
2. the system that I'm working on doesn't have a way to shut down in a way that lets me build test scenarios; lots of stuff is coupled with these core.async shutdown triggers, etc.
When building pipelines I go by this principle - clean up after yourself. Producers are responsible for their channels. When they are done they close them, thus signalling to downstream consumers to shut down
I guarantee that my producer is finished
When consumers take nil from a channel, it has been closed, and they can initiate their own shutdown
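A minimal sketch of that pattern, assuming plain core.async; the names (`start-producer`, `start-consumer`, `process!`, `work-chan`) are hypothetical. The producer closes its own channel when it finishes, and each consumer shuts down when a take returns nil.

```clojure
(require '[clojure.core.async :as a])

(defn start-producer [items]
  (let [work-chan (a/chan 16)]
    (a/go
      (doseq [item items]
        (a/>! work-chan item))
      ;; Producer is done: close the channel to signal downstream consumers.
      (a/close! work-chan))
    work-chan))

(defn start-consumer [work-chan process!]
  (a/go-loop []
    (if-some [item (a/<! work-chan)]
      (do (process! item)
          (recur))
      ;; Took nil: the channel is closed and drained, so shut down cleanly.
      :done)))
```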
I'll add at the end of the system a kw like this: ::closeable-map/after-close (fn [{::keys [my-chan]}] (a/close! my-chan))
then I think it'll handle it 🙂
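A rough sketch of how that could look wired into a system, based on the snippet above. The `closeable-map` constructor and the ::closeable-map/after-close hook are assumptions taken from that library's README; `start-system` and ::my-chan are hypothetical names.

```clojure
(require '[clojure.core.async :as a]
         '[piotr-yuxuan.closeable-map :as closeable-map :refer [closeable-map]])

(defn start-system [config]
  (let [my-chan (a/chan 16)]
    (closeable-map
     {::my-chan my-chan
      ;; Runs when the system map is closed, so the channel is closed
      ;; on every system restart.
      ::closeable-map/after-close (fn [{::keys [my-chan]}]
                                    (a/close! my-chan))})))

;; (with-open [system (start-system {})]
;;   ...) ; ::my-chan is closed when the system is closed
```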
because the producers are in the "other modules"
yes, but they live in a webserver that I stop before the channels
I need to think about whether closing a channel gives me some guarantees that everything has been consumed.
why should it matter? :thinking_face:
sorry, kinda newbie here 😅
It's getting late here so I'd rather continue tomorrow but tldr: blocking in a web handler is a recipe for trouble
what should the length of my channel buffer be? the webserver's connection limit?
I asked the wrong question, I've edited it
In steady state, a channel is either full or empty. You only need a buffer to handle spikes
But channels don't give you feedback, and I guess you don't produce with alts and timeout
without blocking
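A hedged sketch of what "produce with alts and timeout" could look like: offer the value, but get feedback after a deadline instead of blocking a web handler forever. `try-put!` and `work-chan` are hypothetical names.

```clojure
(require '[clojure.core.async :as a])

(defn try-put!
  "Returns a channel that yields true if v was accepted by work-chan within
  timeout-ms, false otherwise (the timeout fired or work-chan is closed)."
  [work-chan v timeout-ms]
  (a/go
    (let [[val port] (a/alts! [[work-chan v] (a/timeout timeout-ms)])]
      ;; For a put op, val is true when the put succeeded; the timeout
      ;; channel winning means the buffer stayed full and we gave up.
      (and (= port work-chan) (true? val)))))
```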
Just out of curiosity, from the previous conversation... Is it possible to have an "eof race condition" when closing a channel with multiple consumers? By "eof race condition" I mean that one go block/thread sees the eof before the other sibling consumers, and so the "first" to see the eof would forward the eof before the siblings are done?
that's not how it works
channels are stateful, and state changes occur under a lock
so there is no visibility race
if two consumers go to consume, that is a race of course, in the way of any concurrent system, and only one will "consume" a value. But eof is not a value consumers race to consume; it's a condition triggered by a flag that consumers will observe (by receiving nil on consume)
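A small sketch illustrating that point with plain core.async: after close!, the remaining buffered values are still delivered, and once drained every take returns nil, for every consumer.

```clojure
(require '[clojure.core.async :as a])

(let [c (a/chan 4)]
  (a/>!! c 1)
  (a/>!! c 2)
  (a/close! c)
  ;; Buffered values are still consumable after close!...
  [(a/<!! c) (a/<!! c)
   ;; ...and once the channel is drained, all takes observe nil.
   (a/<!! c) (a/<!! c)])
;; => [1 2 nil nil]
```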
thanks @alexmiller, great answer. I didn't mean inside core.async, I meant if you design/build a DAG with consumer groups (maybe not the best application for core.async). I guess from your answer that a complex DAG requires a coordinated shutdown anyway. @ben.sless yeah, maybe avoid complex DAGs at all
I mean, you don't necessarily have to close at all
ohhh nice
and you could give processes an independent shutdown channel too, separate from data flow
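One possible shape for such an independent shutdown channel, as a sketch: the worker listens on both the data channel and a dedicated stop channel via alts!, so it can be shut down without closing the data flow. `start-worker`, `data-chan`, `stop-chan`, and `process!` are hypothetical names.

```clojure
(require '[clojure.core.async :as a])

(defn start-worker [data-chan stop-chan process!]
  (a/go-loop []
    (let [[v port] (a/alts! [stop-chan data-chan] :priority true)]
      (cond
        (= port stop-chan) :stopped           ; shutdown requested out of band
        (nil? v)           :data-chan-closed  ; upstream closed the data flow
        :else              (do (process! v) (recur))))))

;; (a/close! stop-chan) shuts the worker down without touching data-chan.
```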