This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-10-25
Channels
- # 100-days-of-code (6)
- # announcements (4)
- # aws (2)
- # beginners (151)
- # boot (1)
- # calva (1)
- # cider (19)
- # clara (47)
- # cljdoc (9)
- # cljs-dev (25)
- # clojars (18)
- # clojure (151)
- # clojure-canada (1)
- # clojure-conj (1)
- # clojure-dev (17)
- # clojure-italy (42)
- # clojure-nl (34)
- # clojure-spec (67)
- # clojure-uk (125)
- # clojurescript (163)
- # core-async (106)
- # cursive (19)
- # data-science (11)
- # datomic (9)
- # duct (2)
- # figwheel (1)
- # figwheel-main (6)
- # fulcro (97)
- # graphql (9)
- # instaparse (4)
- # jobs (6)
- # jobs-discuss (21)
- # leiningen (62)
- # mount (23)
- # off-topic (16)
- # re-frame (15)
- # reagent (16)
- # reitit (5)
- # remote-jobs (1)
- # ring-swagger (9)
- # shadow-cljs (176)
- # tools-deps (102)
- # unrepl (3)
Occam is fun! I did that at university!
(and went on to write a compiler from actuarial formulae to Parallel C on a grid of Transputers!)
Well, Go's machinery and core.async are both based on CSP for which Occam is sort of the "reference implementation".
So anything based on CSP is likely to use alt
I suspect.
Ah. Yeah, I was specifically googling for what term most other CSP implementations used. Didn't realize Occam was the original implementation
(I guess my question would be "What language(s) use select?")
But I also do (not (empty? col)) instead of (seq col) like people on the internet say to do, so *shrug*
Yeah, alt! seems natural if you're used to CSP in any form. But it's all about idioms, and I will say that core.async has always made a lot of assumptions in that area.
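For concreteness, here is a minimal sketch of alt! in action — core.async's analogue of Go's select (and Occam's ALT). The channel names and values are illustrative, not from the conversation:

```clojure
(require '[clojure.core.async :as a :refer [chan go alt! >!! <!!]])

;; alt! parks on several channel operations at once and runs the clause
;; for whichever completes first.
(def c1 (chan))
(def c2 (chan))

(def result-ch
  (go (alt!
        c1 ([v] [:from-c1 v])
        c2 ([v] [:from-c2 v]))))

(>!! c2 :hello)     ; only c2 ever receives a value, so its clause fires
(<!! result-ch)     ; => [:from-c2 :hello]
```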
(def c (a/chan nil))
(a/put! c :val)
(a/take! c #(debug %))
I can put many times before taking. Doesn’t this mean the channel has a buffer? How does this work again?
I think that might be that one https://vimeo.com/100518968
put! is async; if you want blocking behavior constrained by the buffer size, you need to use >!!
but I forgot. I never used channel buffers that much. How does it differ from the put buffer?
I think if there's room in the channel buffer, the value goes into it directly and never sits in the pending put buffer
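The put!/take! behavior being discussed can be seen with a small sketch (illustrative values; the channel here is unbuffered):

```clojure
(require '[clojure.core.async :as a])

;; On an unbuffered channel, put! never blocks the caller: the value is
;; parked on the channel's pending-put queue until a taker arrives.
(def c (a/chan))          ; no buffer
(a/put! c :first)         ; returns immediately; value is pending
(a/put! c :second)        ; also pending; nothing has taken yet

;; Each take! hands over one pending put, in order.
(a/take! c #(println "got" %))   ; prints "got :first"
(a/take! c #(println "got" %))   ; prints "got :second"

;; By contrast, (a/>!! c :third) would block this thread until a taker
;; shows up, because the channel has no buffer to absorb the value.
```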
I do use put! though. I don’t have to block. I use it for server sent events. Each client (web browser connected to our web server) has a channel. I put some messages on it. When the client connects (or is already connected), it reads those messages.
(defn send-message-to-user
  [sse user-id message]
  (let [c (channel-for-user sse user-id)]
    (a/put! c message)))
there doesn’t have to be. if the client will never connect, I don’t want to block anything.
I am not familiar with SSE, but I used to do something similar with WS; usually you want to have control over the rate of sending of those messages, since slow consumers can cause issues (waste etc.)
I guess it also depends on the use case (long lived connections or just small one time burst of messages)
SSE is similar to WS, only the communication is one-way and it’s over HTTP, so I can use all my interceptors.
Actually I want to dedupe on the pending put channel I think. When the client is not connected yet, I don’t want to send the same notification more than once. But once connected, it can be that I send the same message twice with a 15 minute interval, that should actually be received.
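One way that dedupe could be sketched — all names here (`pending`, `put-once!`, `delivered!`) are hypothetical helpers, not part of core.async or any SSE library:

```clojure
(require '[clojure.core.async :as a])

;; Sketch: track messages already pending per user in an atom, and only
;; put! when the message is new. Once the client has read a message, the
;; entry is cleared, so the same message sent again later (e.g. 15 minutes
;; on) goes through.
(def pending (atom {}))   ; user-id -> set of messages awaiting delivery

(defn put-once!
  "Put `message` on channel `c` for `user-id` unless an identical
  message is already pending for that user."
  [c user-id message]
  (let [[old _] (swap-vals! pending update user-id (fnil conj #{}) message)]
    (when-not (contains? (get old user-id #{}) message)
      (a/put! c message))))

(defn delivered!
  "Call once `message` has been read by the client, allowing resends."
  [user-id message]
  (swap! pending update user-id disj message))
```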
Also, the pending put! queue is something that will show up in other contexts. A put can happen in a go block via >! and so on. It's generally good to understand this stuff when using core.async
You mean the go block will blow up, but a put! won’t blow up, it will just return false?
Any 'async' put on a channel (no matter from where) that's full (or without a buffer) and already has 1024 pending puts will blow up, in a go block or otherwise
> java.lang.AssertionError: Assert failed: No more than 1024 pending puts are allowed on a single channel. Consider using a windowed buffer.
> (< (.size puts) impl/MAX-QUEUE-SIZE)
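The limit can be reproduced directly with a fresh unbuffered channel (a sketch; the 1025th async put is the one that throws):

```clojure
(require '[clojure.core.async :as a])

;; A channel accepts at most 1024 pending async puts; one more throws
;; the AssertionError quoted above.
(def c (a/chan))                    ; unbuffered, so every put! is pending
(dotimes [_ 1024] (a/put! c :x))    ; fills the pending-put queue
(try
  (a/put! c :one-too-many)
  (catch AssertionError e
    (println (.getMessage e))))     ; "...No more than 1024 pending puts..."
```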
Good to know. Maybe it makes sense to not even send a message when the client is not connected. I can solve things a different way.
I am not sure about the SSE part of Pedestal, but the WS side of things had (maybe still has) some questionable code
The way I use it: send a simple command via SSE. Then the client knows it should do a certain request, so I can hook into the same re-frame events we already had for those requests.
No “big” data goes over the SSE wire; that is done via normal requests, possibly Transit etc.
race condition of the day: would you expect this to blow up in about 1% of calls? async stuff is so hard 😞
(let [ch (chan)
      sub-ch (chan)
      p (pub ch (constantly :topic))]
  (>!! ch :hi)
  (sub p :topic sub-ch)
  (Thread/sleep 1)
  (assert (not (poll! sub-ch))))
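Why the snippet above blows up intermittently: pub's internal go-loop may take :hi off ch before sub registers sub-ch, and a pub drops values whose topic has no subscribers; occasionally sub wins the race, :hi lands in sub-ch, and the assert fails. A sketch of the race-free ordering (subscribe before putting):

```clojure
(require '[clojure.core.async :as a])

(let [ch     (a/chan)
      sub-ch (a/chan 1)
      p      (a/pub ch (constantly :topic))]
  (a/sub p :topic sub-ch)          ; register the subscriber first
  (a/>!! ch :hi)                   ; now delivery is deterministic
  (assert (= :hi (a/<!! sub-ch))))
```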
@borkdude aleph is fine with websockets - we use yada but drop to aleph for websocket support
@mccraigmccraig ok cool. I’m managing with SSE right now.
Can someone explain core.async garbage collection to me? E.g., if I have pipe a, mult b (of a) and pipe c, and my code only keeps a reference to c, to read out the result, will a and b be garbage collected? They probably wouldn't, because the tap is implemented as a goroutine, which still has a reference to the mult, which in turn has a reference to a. If I then untap the link between b and c, a and b would eventually get garbage collected?
(assuming there is a tap from b to c before, sorry if the explanation isn't very clear 😕 )
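The pipeline being described can be sketched like this (illustrative names; a feeds mult b, which is tapped into c):

```clojure
(require '[clojure.core.async :as a])

;; channel a-ch -> mult b -> tapped channel c-ch; user code keeps only c-ch.
(def a-ch (a/chan))
(def b    (a/mult a-ch))
(def c-ch (a/chan))
(a/tap b c-ch)

(a/put! a-ch :x)
(a/<!! c-ch)          ; => :x  -- values flow a -> mult -> c

;; (a/untap b c-ch) would remove the link; with no taps left, the mult's
;; go-loop still parks taking from a-ch until a-ch is closed.
```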
maybe post some code matching what you are thinking @mrchance - i'm having difficulty visualizing
sure, so my conclusion above is correct? It will continue to run and not be garbage collected until I call untap?
sounds like I have to watch one of @tbaldridge’s talks about async internals and the go macro 😉
actually, c may or may not have a mult callback attached to it depending on where the mult is in its loop, and that mult callback might close over the reference to a
ah, but mult is actually written in such a way that its callback isn't ever attached to c directly
What do you mean by "not attached to c"? I mean, it has to get stuff into c somehow, right?
it does, but it does it in a very indirect way; it is the case that the mult callback will be referenced from c, but via an intermediate internal channel
c -> mult internal done callback that doesn't reference a -> dchan -> callback from the go block in the mult that references a
so referring to c may or may not keep the mult's state (which includes a) alive, depending on whether the mult is stalled waiting for input or waiting to output
But in general, a rule of thumb is that the channels are the central constructs, and only keeping them around and referenced will likely ensure that pipelines don't go away
Makes sense, actually, not very useful to have the pipeline still active when you can't actually put something in it
Is there any resource you recommend to learn more about it? The talks I mentioned? Reading the code?