#core-async
2019-01-09
souenzzo 20:01:53

I'm doing some macros with loop/recur + async/go, and it works in clj but in cljs it throws "can't recur from..." Is it a common problem?

souenzzo 21:01:04

clj -Srepro -Sdeps '{:deps {cljs-bug {:git/url "" :sha "eded47447764e7eb8d4cb25be1d77640e9c50aaf"}}}' -m hack ## works!
clj -Srepro -Sdeps '{:deps {cljs-bug {:git/url "" :sha "eded47447764e7eb8d4cb25be1d77640e9c50aaf"}}}' -m cljs.main --repl-env node -m hack ##  Can't recur here at line 44 hack.cljc
https://gist.github.com/souenzzo/2d3e171bf718e32becf3cb9d15ea6dfa

mauricio.szabo 21:01:06

Hello there. I'm trying to understand the "right" way of doing async with core.async. Is there any complete guide? For example, I'm trying to avoid callbacks (in CLJS and in Clojure), but every time I end up hitting the 1024 limit of pending operations... in the end, I find myself mapping over a bunch of go blocks and waiting for them, and never using channels, but I don't think that's the way it's supposed to work...

hiredman 21:01:06

if you are hitting the 1024 limit you are likely not communicating back pressure correctly

hiredman 21:01:16

e.g. using put! to publish to a channel without any way to tell the publishers to slow down or stop publishing if the consumers are going slower than the publishers
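
A minimal sketch of that failure mode, assuming an unbuffered channel and no consumer keeping up:

(require '[clojure.core.async :as a])

;; Nothing tells this producer to slow down: each put! just queues a
;; pending put. Once 1024 of them accumulate on one channel, core.async
;; throws an AssertionError about exceeding the pending-put limit.
(def ch (a/chan))

(dotimes [_ 2000]
  (a/put! ch :msg))   ; blows up once 1024 puts are pending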

hiredman 21:01:16

or if you are spinning up new go blocks all the time and they are writing to a channel faster than the consumer can keep up, and the consumer has no way to signal to whatever is spinning up the go blocks that it needs to slow down
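
By contrast, a sketch where the producer uses >! inside a go block, so it parks whenever the buffer is full and is indirectly throttled to the consumer's pace (buffer size and delay are arbitrary):

(require '[clojure.core.async :as a])

(def ch (a/chan 16))   ; small buffer to absorb short bursts

;; Producer: >! parks the go block whenever the buffer is full, so it
;; can never outrun the consumer by more than the buffer size.
(a/go
  (dotimes [i 10000]
    (a/>! ch i)))

;; Slow consumer: its pace is what actually limits the producer above.
(a/go-loop []
  (when-some [v (a/<! ch)]
    (a/<! (a/timeout 10))   ; simulate slow processing
    (prn :handled v)
    (recur)))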

hiredman 21:01:20

an analogy for a core.async program is some kind of system of pipes (maybe water or natural gas); hitting the 1024 limit is like blowing out a section of pipe due to overpressure

mauricio.szabo 21:01:45

But that's something I don't understand: if I need to tell the publisher to slow down, then my concurrent code becomes harder to use, not easier. For example, I've seen multiple posts about using core.async to avoid callbacks in ClojureScript. With Node.js sockets or websockets, I get a callback whenever a new message arrives... how do I tell the source to stop publishing when, at that moment, there are more messages than I was expecting?

mauricio.szabo 21:01:40

If I have a system that is idle 90% of the time and suddenly gets a big flow of messages, how do I write that system with core.async?

serioga 09:01:33

The simplest way is to use sliding/dropping buffers; then messages will just be lost during peaks.
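
For illustration, a sketch of both buffer types (sizes are arbitrary):

(require '[clojure.core.async :as a])

;; dropping-buffer: when 16 values are already queued, new puts complete
;; but their values are silently discarded.
(def drop-ch (a/chan (a/dropping-buffer 16)))

;; sliding-buffer: when full, the *oldest* queued value is dropped to
;; make room for the newest.
(def slide-ch (a/chan (a/sliding-buffer 16)))

;; put! always completes immediately on these channels, so the 1024
;; pending-put limit can't be hit -- at the cost of losing data on peaks.
(dotimes [i 100]
  (a/put! drop-ch i))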

serioga 09:01:22

If you want to manage publisher state according to consumer load, then there should be a feedback loop between consumer and publisher.
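
One minimal shape such a feedback loop can take is a demand channel, where the producer only emits after the consumer asks; the channel names here are illustrative:

(require '[clojure.core.async :as a])

(def requests (a/chan))   ; consumer -> producer: "send one more"
(def items    (a/chan))   ; producer -> consumer: the data itself

;; Producer: only produces after receiving a request, so it can never
;; run ahead of what the consumer has asked for.
(a/go-loop [i 0]
  (when (a/<! requests)
    (a/>! items i)
    (recur (inc i))))

;; Consumer: asks, takes, and processes at its own pace.
(a/go-loop []
  (a/>! requests :more)
  (when-some [v (a/<! items)]
    (a/<! (a/timeout 100))   ; simulate slow processing
    (prn :consumed v)
    (recur)))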

bortexz 18:01:17

Hi @U0HJNJWJH, I have a similar case where I'd like some feedback and control messages between producers and consumers; do you know of any references or patterns to read?

bortexz 18:01:47

I’ll have a look at them, thanks!

hiredman 21:01:01

it is complicated; some of those APIs are badly designed and don't communicate back pressure well, so you may have to make it part of your communication protocol

hiredman 21:01:37

it doesn't matter how idle it is; the balance of producing and consuming is what matters

hiredman 21:01:31

to stretch the physical pipe analogy

hiredman 21:01:57

you can put a large buffer on a channel, which is like increasing the size of the pipe

hiredman 21:01:11

so it can carry more before pressure starts to build
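
A sketch of that difference, with an arbitrary buffer size of 1000:

(require '[clojure.core.async :as a])

;; Unbuffered: pressure is felt immediately; every >! parks until a
;; consumer takes.
(def narrow (a/chan))

;; Fixed buffer of 1000: absorbs a burst of up to 1000 values before the
;; producer starts parking -- a wider pipe, not an unlimited one.
(def wide (a/chan 1000))

(a/go
  (dotimes [i 5000]
    (a/>! wide i)))   ; parks once ~1000 values are buffered, until taken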

hiredman 21:01:56

it has been a while since I watched it, but I think https://www.youtube.com/watch?v=1bNOO3xxMc0 has a pretty good discussion of back pressure (not core.async specific)

hiredman 21:01:12

if the communication API you are using doesn't provide for back pressure, you can implement it yourself on top of the communication API; https://github.com/hiredman/roundabout/blob/master/src/com/manigfeald/roundabout.cljc is a rough example that works sort of like TCP flow control
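
This is not the roundabout code, just a rough sketch of the general shape, where subscribe!, pause! and resume! are hypothetical hooks standing in for whatever your source API provides (e.g. a Node stream's pause()/resume()):

(require '[clojure.core.async :as a])

(defn callback->chan
  "Bridges a callback-based source onto a channel, pausing the source
  whenever the channel can't accept a message immediately.
  `subscribe!`, `pause!` and `resume!` are hypothetical hooks you supply
  for your particular API."
  [subscribe! pause! resume! buf-size]
  (let [ch (a/chan buf-size)]
    (subscribe!
     (fn [msg]
       (when-not (a/offer! ch msg)          ; buffer full: back pressure
         (pause!)
         ;; deliver the overflow message asynchronously, then resume the
         ;; source once the channel has accepted it
         (a/put! ch msg (fn [_] (resume!))))))
    ch))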

hiredman 21:01:33

at one point the in-browser websocket implementations didn't handle flow control well (https://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0217.html); I don't know if that is still the case or if the state of the art has improved
