#core-async
2016-12-30
danboykis00:12:00

@noisesmith you understood correctly, i am not sure i follow the second part though

noisesmith00:12:23

danboykis is http-kit-call async?

noisesmith00:12:11

if yes, then the buffering on that channel does nothing to slow your requests

noisesmith00:12:37

the go loop will grab more requests to send and send them as fast as they come in

noisesmith00:12:47

the buffering does nothing to limit the number of requests in flight
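
A minimal sketch of the problem being described, assuming a hypothetical async client function http-kit-call that takes a callback (all names here are illustrative):

```clojure
(require '[clojure.core.async :as a :refer [go-loop <!]])

(def requests  (a/chan 1000))   ; the 1000-slot buffer under discussion
(def responses (a/chan 1000))

(defn http-kit-call [req cb]            ; hypothetical async client
  (future (cb {:status 200 :req req}))) ; stand-in for a real HTTP call

;; Anti-pattern: because the client call is async, the loop never parks
;; waiting for a response. It fires the request and immediately takes
;; the next one, so the buffer does nothing to bound in-flight requests.
(go-loop []
  (when-let [req (<! requests)]
    (http-kit-call req (fn [resp] (a/put! responses resp)))
    (recur)))
```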

noisesmith00:12:31

if you use something that blocks (and switch from go-loop to thread with a loop) or parks, and you start 1000 loops in either case, then you know at most 1000 requests will be in flight
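
A sketch of that pattern: N real threads each running a blocking loop, so at most N requests are in flight at once (blocking-call is a hypothetical synchronous stand-in):

```clojure
(require '[clojure.core.async :as a :refer [<!! >!! thread]])

(defn blocking-call [req]  ; hypothetical synchronous client
  (Thread/sleep 50)        ; simulate network latency
  {:status 200 :req req})

(def requests  (a/chan))   ; unbuffered: producers park/block when busy
(def responses (a/chan))

;; Start 1000 loops; each one blocks on its own call before taking the
;; next request, so at most 1000 requests are ever in flight.
(doseq [_ (range 1000)]
  (thread
    (loop []
      (when-let [req (<!! requests)]
        (>!! responses (blocking-call req))
        (recur)))))
```

The loop count, not any channel buffer, is what bounds the parallelism here.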

noisesmith00:12:50

number of loops controls parallelism, capacity of channel does not

noisesmith00:12:20

(and it loses control of parallelism if it calls something async internally)

danboykis00:12:25

that's why I sent another channel via slow-http-ch and then read from it in write-to-db

noisesmith00:12:37

but those things don't control parallelism either

noisesmith00:12:20

the size of slow-http-ch tells you how many clients can keep sending you data even though you are busy

noisesmith00:12:00

that doesn't slow anything down, it just hides a speed difference (until it gets overloaded, or something downstream from it goes too fast and breaks something)

danboykis00:12:56

i thought the first go-loop would block once the channel is full, no?

noisesmith00:12:33

right, but the thing reading the channel is not slowed down

noisesmith00:12:47

is it the writer that you are trying to slow? maybe I'm confused

noisesmith00:12:13

the problem is that the consumer of the slow-http-channel is calling something async

noisesmith00:12:27

which means that your backpressure doesn't apply

danboykis00:12:30

ok, I see what you're saying

danboykis00:12:17

i'm not sure what the best way to handle this situation is

noisesmith00:12:42

if you want N things in flight, then allow N loops to send one item at a time each

noisesmith00:12:13

another option is to have some sort of backpressure derived from your backend - e.g. if the server takes more than N ms to serve a response, then add a sleep of N*x to your loop before sending the next request (this may be a silly way to do it, there's probably a better design)
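
A rough sketch of that heuristic, reusing the requests/responses channels and the blocking-call stand-in from the earlier sketch (the threshold and multiplier values are illustrative):

```clojure
(require '[clojure.core.async :as a :refer [<!! >!! thread]])

(def slow-threshold-ms 200)  ; N: latency that signals a struggling backend
(def backoff-factor    3)    ; x: how hard to back off

(thread
  (loop []
    (when-let [req (<!! requests)]
      (let [start (System/nanoTime)
            resp  (blocking-call req)
            ms    (/ (- (System/nanoTime) start) 1e6)]
        (>!! responses resp)
        ;; crude artificial backpressure: if the backend was slow,
        ;; pause this loop for N*x ms before sending the next request
        (when (> ms slow-threshold-ms)
          (Thread/sleep (* slow-threshold-ms backoff-factor)))
        (recur)))))
```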

danboykis00:12:52

the problem is that this service just falls down

danboykis00:12:01

it doesn't really degrade much

noisesmith00:12:16

yeah, in that case you need to estimate and use artificial backpressure

noisesmith00:12:32

another option is a throttled channel - it only lets things through at a fixed rate and applies backpressure
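
core.async has no built-in throttle, but a minimal fixed-rate relay can be sketched with timeout (the interval is illustrative):

```clojure
(require '[clojure.core.async :as a :refer [go-loop <! >! timeout]])

(defn throttled-chan
  "Relays values from `in` at most once per `interval-ms`. The loop
  parks between takes, so once `in`'s buffer fills, puts to it see
  real backpressure."
  [in interval-ms]
  (let [out (a/chan)]
    (go-loop []
      (if-let [v (<! in)]
        (do (>! out v)
            (<! (timeout interval-ms))
            (recur))
        (a/close! out)))
    out))

;; at most ~10 values per second reach whoever reads this channel
(def throttled-requests (throttled-chan requests 100))
```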

noisesmith00:12:49

there are various utilities in ztellman's manifold library for this stuff

noisesmith00:12:30

in fact, using a throttled channel with a good limit on it, with your original design (and without the 1000-size buffer), might just do what you want

noisesmith01:12:32

@danboykis https://github.com/ztellman/manifold/blob/master/README.md which includes throttle plus other useful async plumbing
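
For reference, manifold's stream throttling looks roughly like this (the rate is illustrative; see the README above for the exact API):

```clojure
(require '[manifold.stream :as s])

(def request-stream (s/stream))

;; limit the flow to ~10 messages/second; upstream puts are subject
;; to backpressure once the rate limit is hit
(def throttled (s/throttle 10 request-stream))

(s/consume
  (fn [req] (println "sending" req))  ; stand-in for the real send
  throttled)
```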

danboykis01:12:23

thanks, I'll look into it!