2017-08-29
Channels
- # aws (1)
- # beginners (78)
- # boot (27)
- # cider (16)
- # clara (15)
- # cljs-dev (84)
- # cljsjs (13)
- # cljsrn (19)
- # clojure (65)
- # clojure-france (10)
- # clojure-italy (8)
- # clojure-russia (35)
- # clojure-spec (34)
- # clojure-uk (124)
- # clojurescript (50)
- # clojutre (3)
- # core-async (16)
- # data-science (18)
- # datascript (1)
- # datomic (9)
- # emacs (2)
- # flambo (3)
- # fulcro (55)
- # graphql (3)
- # hoplon (4)
- # jobs (2)
- # juxt (21)
- # keechma (6)
- # lumo (73)
- # off-topic (4)
- # om (10)
- # onyx (5)
- # parinfer (1)
- # pedestal (3)
- # re-frame (60)
- # reagent (19)
- # specter (24)
Does anybody know of a lib that helps monitor core.async data flows, in some sort of aggregate / sampled / etc sort of way?
For context, I'm considering building something like a core.async buffer that would support logging or reporting things like:
- throughput per configurable interval (in # of messages)
- a warning when the buffer length exceeds some % of defined capacity (e.g. "unhealthy"), with some backoff to keep it in check
- a warning when items are dropped (in the sliding or dropping case), again with backoff
you could just attach this as a transducer to a channel
oh, except monitoring channel buffer usage I guess
Yeah, I figured the transducer case handles the throughput sufficiently, but doesn't give insight into consumption on the other end
well, that’s why you put a transducer on the other end too heh
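The transducer idea above can be sketched as a stateful counting transducer attached to a channel at creation. This is a hedged illustration, not from the conversation: the `counting-xf` name, the label, and the logging format are all made up, and `println` stands in for a real logging call.

```clojure
(require '[clojure.core.async :as a])

;; Hypothetical helper: a stateful transducer that counts items passing
;; through a channel and prints the throughput once per interval.
(defn counting-xf [label interval-ms]
  (fn [rf]
    (let [n           (atom 0)
          last-report (atom (System/currentTimeMillis))]
      (fn
        ([] (rf))
        ([result] (rf result))
        ([result item]
         (swap! n inc)
         (let [now (System/currentTimeMillis)]
           (when (>= (- now @last-report) interval-ms)
             (println label "throughput:" @n "msgs in" (- now @last-report) "ms")
             (reset! n 0)
             (reset! last-report now)))
         (rf result item))))))

;; Attach one at the producing end; a second one on the consumer-side
;; channel gives visibility into both ends, as suggested above.
(def in-ch (a/chan 10 (counting-xf "producer" 1000)))
```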
yeah, seems like extending channel would be the way to do this (not so hard to do with a deftype)
This came out of first building a core.async system using blocking buffers, which was fine in production until a spike of input triggered an unhandled exception that caused some consumers to stop consuming, and the whole thing came to a halt.
Generally, I've found nonblocking buffers work well for building a self-healing core.async system, but in doing so it's easy to lose insight into how healthy the system is.
That and I've used the carmine lib for redis quite a bit, and its monitoring fn has a nice property: it can log warnings when the mq size exceeds a threshold
if what matters most is buffer utilization, the parsimonious thing for that is a custom buffer implementation - though re-implementing channel is also doable, it’s not as simple as that
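A custom buffer along those lines might look like the sketch below: a `deftype` that wraps any core.async buffer and warns when utilization crosses a threshold. Caveat: this reaches into `clojure.core.async.impl.protocols`, which is an internal, unsupported namespace, so treat it as illustrative; the `MonitoringBuffer` and `monitoring-buffer` names are invented here.

```clojure
(require '[clojure.core.async :as a]
         '[clojure.core.async.impl.protocols :as impl])

;; Sketch: wraps an inner buffer, delegating all Buffer operations to it,
;; and prints a warning when the fill ratio exceeds `threshold`.
(deftype MonitoringBuffer [buf capacity threshold]
  impl/Buffer
  (full? [_] (impl/full? buf))
  (remove! [_] (impl/remove! buf))
  (add!* [this itm]
    (impl/add!* buf itm)
    (when (> (/ (count buf) capacity) threshold)
      (println "WARN: buffer at" (count buf) "/" capacity))
    this)
  (close-buf! [_] (impl/close-buf! buf))
  clojure.lang.Counted
  (count [_] (count buf)))

(defn monitoring-buffer [capacity threshold]
  (->MonitoringBuffer (a/buffer capacity) capacity threshold))

;; Usage: pass it wherever a buffer is accepted, e.g.
;; (a/chan (monitoring-buffer 100 0.8))
```

Wrapping a sliding or dropping buffer instead of `a/buffer` would also let you detect drops by checking `full?` before delegating the add.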
Are there any known best practices for sente->core.async->om sort of apps when dealing with higher-throughput data pipelines? The obvious approach of sente's pushed-msg-handler sticking data into om seems unable to handle a lot of data. Some kind of backpressure is needed.
I haven't worked with sente or om. I would hope that sente provides some kind of flow control, but if it doesn't and just relies on the underlying transport, then from what I have read websocket flow control is kind of hit or miss. If om exposes some kind of callback to tell you when it is done handling the data you push in, you should be able to do something like https://github.com/hiredman/roundabout/blob/master/src/com/manigfeald/roundabout.cljc
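The callback-driven flow control being described could be sketched like this. Everything here is an assumption for illustration: `render-then` is a hypothetical stand-in for "the UI library calls you back when rendering is done", and the bounded channel is what pushes backpressure upstream toward the producer.

```clojure
(require '[clojure.core.async :as a])

(def msgs (a/chan 10)) ; small fixed buffer: puts park/block when it fills

;; Hypothetical stand-in for a UI render with a completion callback;
;; a real app would invoke `done` from the library's post-render hook.
(defn render-then [msg done]
  (println "rendered" msg)
  (done))

;; Consume one message at a time, waiting for the render callback
;; before taking the next, so the channel (and hence the producer)
;; can never get more than one render ahead.
(a/go-loop []
  (when-let [msg (a/<! msgs)]
    (let [done (a/chan)]
      (render-then msg #(a/close! done))
      (a/<! done))
    (recur)))
```

A producer doing `(a/>!! msgs x)` then blocks whenever the consumer falls behind by more than the buffer size, which is the backpressure asked about above.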