This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-12-30
Channels
- # adventofcode (11)
- # beginners (155)
- # boot (627)
- # cider (64)
- # cljs-dev (110)
- # cljsrn (36)
- # clojure (290)
- # clojure-austin (21)
- # clojure-russia (2)
- # clojure-spec (2)
- # clojure-uk (21)
- # clojurescript (81)
- # code-reviews (2)
- # core-async (33)
- # cursive (6)
- # datomic (9)
- # emacs (1)
- # hoplon (472)
- # instaparse (1)
- # lein-figwheel (4)
- # luminus (9)
- # om (2)
- # protorepl (10)
- # re-frame (10)
- # reagent (48)
- # schema (2)
- # sql (5)
- # untangled (17)
- # vim (1)
- # yada (108)
@noisesmith you understood correctly, I am not sure I follow the second part though
danboykis is http-kit-call async?
if yes, then the buffering on that channel does nothing to slow your requests
the go loop will grab more requests to send and send them as fast as they come in
the buffering does nothing to limit the number of requests in flight
if you use something that blocks (switching from go-loop to thread with a loop) or parks, and start 1000 loops in either case, then you know at most 1000 requests will be in flight
number of loops controls parallelism, capacity of channel does not
(and it loses control of parallelism if it calls something async internally)
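The point above — that the number of consumer loops, not the channel's buffer capacity, bounds parallelism — can be sketched roughly like this (a minimal illustration, not code from the conversation; `blocking-request` is a made-up stand-in for a real blocking HTTP call):

```clojure
(require '[clojure.core.async :as a])

(def request-ch (a/chan 1000)) ; the buffer only smooths bursts, it does not limit concurrency

(defn blocking-request [url]
  ;; hypothetical stand-in for a real blocking HTTP call
  (Thread/sleep 100)
  {:status 200 :url url})

;; start N loops; at most N requests are ever in flight,
;; because each loop blocks until its current request finishes
(doseq [_ (range 10)]
  (a/thread
    (loop []
      (when-let [url (a/<!! request-ch)]
        (blocking-request url)
        (recur)))))
```

Here parallelism is exactly 10 no matter how large `request-ch`'s buffer is.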
that's why I sent another channel via slow-http-ch
and then read from it in write-to-db
but those things don't control parallelism either
or do they?
the size of slow-http-ch tells you how many clients can keep sending you data even though you are busy
that doesn't slow anything down, it just hides a speed difference (until it gets overloaded, or something downstream from it goes too fast and breaks something)
right, but the thing reading the channel is not slowed down
is it the writer that you are trying to slow? maybe I'm confused
the problem is that the consumer of the slow-http-channel is calling something async
which means that your backpressure doesn't apply
if you want N things in flight, then allow N loops to send one item at a time each
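Even with an async client, "N loops sending one item at a time each" still works if each loop parks until the callback fires. A sketch under assumptions: `async-get` is a hypothetical stand-in for http-kit's callback-based client, and N = 10 is arbitrary:

```clojure
(require '[clojure.core.async :as a])

(defn async-get [url callback]
  ;; hypothetical stand-in for (org.httpkit.client/get url callback)
  (future (Thread/sleep 100) (callback {:status 200 :url url})))

(def request-ch (a/chan))

;; N = 10 loops => at most 10 requests in flight, even though
;; the client itself is async
(doseq [_ (range 10)]
  (a/go-loop []
    (when-let [url (a/<! request-ch)]
      (let [done (a/chan 1)]
        (async-get url #(a/put! done %))
        (a/<! done))                  ; park here until the response arrives
      (recur))))
```

The trick is the per-request `done` channel: it converts the fire-and-forget callback back into something the loop can park on, restoring backpressure.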
another option is to have some sort of backpressure derived from your backend - eg. if the server takes more than N ms to serve a response then add a sleep of N*x to your loop before sending the next request (this may be a silly way to do it, there's probably a better design)
yeah, in that case you need to estimate and use artificial backpressure
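The "estimated, artificial backpressure" idea might look like the following sketch; `do-request`, the 200 ms threshold, and the 2x multiplier are all invented for illustration:

```clojure
(require '[clojure.core.async :as a])

(def request-ch (a/chan 10))

(defn do-request [url]
  ;; hypothetical async request returning a channel carrying the response
  (a/go (a/<! (a/timeout 50)) {:status 200 :url url}))

(def threshold-ms 200)    ; assumed: "acceptable" server latency
(def slowdown-factor 2)   ; assumed: how hard to back off

(a/go-loop []
  (when-let [url (a/<! request-ch)]
    (let [start (System/currentTimeMillis)
          _     (a/<! (do-request url))
          took  (- (System/currentTimeMillis) start)]
      ;; if the server was slow, slow ourselves down proportionally
      (when (> took threshold-ms)
        (a/<! (a/timeout (* took slowdown-factor)))))
    (recur)))
```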
another option is a throttled channel - only lets things through at a fixed rate and applies backpressure
there are various utilities in ztellman's manifold library for this stuff
in fact using a throttled channel with a good limit on it, with your original design (and without the 1000 size buffer) might just do what you want
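A throttled channel can be sketched in plain core.async as a loop that moves items from `in` to `out` no faster than one per interval (Manifold ships a ready-made version of this idea; this is just an illustration):

```clojure
(require '[clojure.core.async :as a])

(defn throttle
  "Returns a channel that relays values from `in` at most one per `msecs`."
  [in msecs]
  (let [out (a/chan)] ; unbuffered, so consumers still feel backpressure
    (a/go-loop []
      (if-let [v (a/<! in)]
        (do (a/>! out v)
            (a/<! (a/timeout msecs)) ; enforce the fixed rate
            (recur))
        (a/close! out)))
    out))
```

Because `out` is unbuffered and the producer side of `in` parks when `in` fills, the rate limit propagates backward as backpressure rather than just dropping or hiding work.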
@noisesmith you're talking about this: https://github.com/brunoV/throttler ?
@danboykis https://github.com/ztellman/manifold/blob/master/README.md which includes throttle plus other useful async plumbing