#core-async
2017-08-29
rwilson17:08:42

Does anybody know of a lib that helps monitor core.async data flows, in some sort of aggregate / sampled / etc sort of way?

rwilson17:08:11

For context, I'm considering building something like a core.async buffer that would support logging or reporting things like:
• throughput per configurable interval (in # of messages)
• warning when the buffer length exceeds some % of defined capacity (e.g. "unhealthy"), with some backoff to keep it in check
• warning when items are dropped (in the sliding or dropping case), again with backoff
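
One low-tech way to get the capacity warning without touching channel internals is to keep a reference to the buffer you pass to `chan` and sample it from a monitor loop. This is only a sketch; `monitored-chan`, the threshold, and the five-second sampling interval are illustrative, not anything from the thread:

```clojure
(require '[clojure.core.async :as a])

;; Sketch only: keep a handle on the buffer handed to `chan` and sample
;; its fill level from a monitor loop. FixedBuffer implements Counted, so
;; `count` works, but reading it outside the channel's lock is racy; that
;; is usually acceptable for coarse health monitoring.
(defn monitored-chan
  "Returns a channel that prints a warning whenever its buffer utilization
  exceeds `threshold` (a fraction of `capacity`). Names are illustrative."
  [capacity threshold]
  (let [buf (a/buffer capacity)
        ch  (a/chan buf)]
    (a/go-loop []
      (a/<! (a/timeout 5000))
      (let [n (count buf)]
        (when (> n (* threshold capacity))
          (println "channel buffer unhealthy:" n "of" capacity)))
      (recur))
    ch))
```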

noisesmith17:08:39

you could just attach this as a transducer to a channel

noisesmith17:08:55

oh, except monitoring channel buffer usage I guess

rwilson17:08:23

Yeah, I figured the transducer case handles the throughput sufficiently, but doesn't give insight into consumption on the other end

noisesmith17:08:57

well, that’s why you put a transducer on the other end too heh
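
A minimal sketch of the transducer idea, assuming a side-effecting counter is acceptable; `counting-xf`, `produced`, and the one-second sampling loop are made-up names, not part of core.async:

```clojure
(require '[clojure.core.async :as a])

(defn counting-xf
  "Transducer that increments `counter` for every item passing through,
  leaving the items untouched."
  [counter]
  (map (fn [x] (swap! counter inc) x)))

(def produced (atom 0))

;; Attach the transducer to the producing channel; piping into a second
;; channel with its own counter would show consumption on the other end.
(def in (a/chan (a/buffer 32) (counting-xf produced)))

;; Sample the counter on a fixed interval for a rough msgs/interval rate.
(a/go-loop []
  (a/<! (a/timeout 1000))
  (println "msgs in last interval:" @produced)
  (reset! produced 0)
  (recur))
```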

noisesmith18:08:55

yeah, seems like extending channel would be the way to do this (not so hard to do with a deftype)

rwilson18:08:29

Perhaps so, I'll look into that

rwilson18:08:32

This came out of first building a core.async system with blocking buffers. It was fine in production until a spike of input triggered an unhandled exception that caused some consumers to stop consuming, and the whole thing came to a halt.

rwilson18:08:22

Generally, I've found nonblocking buffers work well for building a self-healing core.async system, but in doing so it's easy to lose insight into how healthy the system is.

rwilson18:08:33

Hence, the desire for the buffer-level monitoring.

rwilson18:08:02

That and I've used the carmine lib for Redis quite a bit, and its monitoring fn has a nice property of being able to log warnings when the message-queue size exceeds a threshold

noisesmith18:08:23

if what matters most is buffer utilization, the parsimonious thing for that is a custom buffer implementation; re-implementing channel is also doable, but it’s not as simple as that
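
A rough sketch of such a custom buffer, wrapping the stock fixed buffer and reaching into `clojure.core.async.impl.protocols` (an internal namespace, so this may break across core.async versions); `MonitoredBuffer`, `threshold`, and `on-warn` are illustrative names:

```clojure
(ns my.monitored-buffer
  (:require [clojure.core.async :as a]
            [clojure.core.async.impl.protocols :as impl]))

;; Wraps an existing buffer and calls `on-warn` when utilization crosses
;; `threshold` (a fraction of `capacity`).
(deftype MonitoredBuffer [buf capacity threshold on-warn]
  impl/Buffer
  (full? [_] (impl/full? buf))
  (remove! [_] (impl/remove! buf))
  (add!* [this itm]
    (impl/add!* buf itm)
    (let [n (count buf)]
      (when (>= n (* threshold capacity))
        (on-warn n capacity)))
    this)
  (close-buf! [_] (impl/close-buf! buf))
  clojure.lang.Counted
  (count [_] (count buf)))

(defn monitored-buffer [capacity threshold on-warn]
  (->MonitoredBuffer (a/buffer capacity) capacity threshold on-warn))

;; Usage: (a/chan (monitored-buffer 1024 0.8
;;                  (fn [n cap] (println "buffer at" n "of" cap))))
```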

rplevy21:08:02

Are there any known best practices for sente->core.async->om sorts of apps when dealing with higher-throughput data pipelines? The obvious approach of sente's pushed-msg-handler sticking data into om seems unable to handle a lot of data. Some kind of backpressure is needed.

hiredman23:08:13

I haven't worked with sente or om. I would hope that sente provides some kind of flow control, but if it doesn't and just relies on the underlying transport, then from what I have read websocket flow control is kind of hit or miss. If om exposes some kind of callback to tell you when it is done handling the data you push in, you should be able to do something like https://github.com/hiredman/roundabout/blob/master/src/com/manigfeald/roundabout.cljc
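
For what that callback-driven shape could look like in plain core.async (just a sketch of the general idea, not roundabout's actual API; `handle!` stands in for whatever done-callback the UI layer exposes):

```clojure
(require '[clojure.core.async :as a])

(defn consume-with-backpressure
  "Takes messages from `in` one at a time and calls (handle! msg done-fn).
  The next take only happens after done-fn is called, so a bounded buffer
  on `in` pushes back on the producer instead of piling up work."
  [in handle!]
  (a/go-loop []
    (when-some [msg (a/<! in)]
      (let [done (a/chan)]
        (handle! msg #(a/close! done))
        (a/<! done))
      (recur))))
```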

hiredman23:08:59

I guess if sente is using put! (which I think you pretty much have to on the ClojureScript side) in the message handler on the client side, that would break any notion of feedback with the websocket anyway