2016-09-10
Channels
- # beginners (40)
- # boot (307)
- # boulder-clojurians (2)
- # carry (3)
- # cljs-dev (3)
- # cljsjs (16)
- # clojure (42)
- # clojure-greece (3)
- # clojure-russia (10)
- # clojure-uk (3)
- # clojurescript (116)
- # community-development (1)
- # component (5)
- # conf-proposals (2)
- # core-async (1)
- # crypto (2)
- # cursive (3)
- # devcards (1)
- # events (1)
- # hoplon (123)
- # om (28)
- # onyx (17)
- # pedestal (3)
- # proton (1)
- # re-frame (18)
- # reagent (26)
if I get onyx sending metrics to riemann and I have a riemann dashboard, is there a known easy way I can get those stats showing on the riemann dashboard?
@michaeldrogalis Do emitted Onyx metrics take into account the parallelism of tasks?
Is there a document somewhere that would help me make sense of all these stats? I'm not sure of the difference between max batch latency and percentile batch latency.
@aengelberg Generally the receiver of the metrics coalesces the values based on the host that sent them, so you can roll up the values as a whole, or slice them by host.
We should probably get descriptions on the metric names. Most of them are obvious, but a few aren't.
I'm out now, can answer more questions tomorrow.
But it sounds like I shouldn't expect to get useful information if I just use the Riemann dashboard?
For example, if I query for tagged "throughput_60s", the dashboard will always be showing me one random metric sent by one virtual peer, not the total throughput across peers.
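Not from the thread, but a minimal riemann.config sketch of the roll-up being described, assuming each virtual peer sends events tagged "throughput_60s" and the stock folds alias is available in the config; the 5-second interval and the aggregate service name are made up:

```clojure
;; Minimal sketch: roll per-peer Onyx throughput events up into one
;; aggregate event. Interval and service name are illustrative.
(let [index (index)]
  (streams
    ;; index the raw per-peer events so you can still slice by host
    index
    (tagged "throughput_60s"
      ;; coalesce remembers the latest event per host/service pair and
      ;; forwards them downstream as a batch every 5 seconds
      (coalesce 5
        ;; sum the batch into one event and rename it before indexing,
        ;; so a dashboard query sees cluster-wide throughput instead of
        ;; one randomly-picked virtual peer's value
        (smap folds/sum
          (with :service "onyx total throughput_60s"
            index))))))
```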
(Onyx noob here) I am working with Onyx and my input node is using the onyx-kafka plugin. Does anyone have any insight into :kafka/fetch-size and its impact on performance?
I am processing rather large messages, and increasing it by a factor of 10 from the default value gives a dramatic improvement in my setup. I am trying to understand this behaviour. Any help would be much appreciated!
The other property which is affecting performance independently of fetch-size is empty-read-back-off. If I just reduce that to 50 from 500, I see a big improvement in performance.
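For context, a hedged sketch of what the onyx-kafka input catalog entry being tuned might look like (keys as of the 0.9-era plugin); the topic, group id, deserializer, and the concrete numbers are illustrative placeholders, not recommendations:

```clojure
;; Sketch of an onyx-kafka :input catalog entry showing the two knobs
;; under discussion. Topic, group id, deserializer-fn and the values
;; here are placeholders, not recommended settings.
{:onyx/name :read-messages
 :onyx/plugin :onyx.plugin.kafka/read-messages
 :onyx/type :input
 :onyx/medium :kafka
 :kafka/topic "my-topic"
 :kafka/group-id "my-consumer-group"
 :kafka/zookeeper "127.0.0.1:2181"
 :kafka/deserializer-fn :my.app.serde/deserialize-message
 :kafka/offset-reset :smallest
 ;; max bytes fetched per request; raising this helps when individual
 ;; messages are large, since more of them fit into a single fetch
 :kafka/fetch-size 3072000
 ;; ms to back off after a read that returns no messages
 :kafka/empty-read-back-off 50
 :onyx/batch-size 100
 :onyx/max-peers 1
 :onyx/doc "Reads messages from a Kafka topic"}
```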
@rohit: fetch size will help you read more messages in one go, especially if the messages are large, I believe. I'm a bit surprised that the empty read back-off is hurting your throughput. Once you do an empty read back-off, there should be enough messages that you will read a bunch again and things will go fast. Hmm
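A rough sketch (an assumption about the behaviour being described, not the actual onyx-kafka source) of why a large back-off can cost throughput: any poll that comes back empty makes the reader sit idle for the full back-off before it tries again.

```clojure
;; Rough sketch of the described read-loop behaviour; poll-kafka! and
;; handle-batch! are hypothetical stand-ins, not onyx-kafka functions.
(defn read-loop [poll-kafka! handle-batch! empty-read-back-off-ms]
  (loop []
    (let [msgs (poll-kafka!)]
      (if (seq msgs)
        (handle-batch! msgs)
        ;; an empty read stalls the reader for the whole back-off, so
        ;; 500 ms pauses add up whenever the consumer briefly catches up
        (Thread/sleep empty-read-back-off-ms)))
    (recur)))
```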
@lucasbradstreet: I am surprised as well. I am going to investigate this. thanks!
Thank you. Let us know how you go. It's possible there's a bug