
If I get Onyx sending metrics to Riemann and I have a Riemann dashboard, is there a known easy way I can get those stats showing on the Riemann dashboard?


Not sure, I'm using grafana/influx/Riemann stack


Basing it off the onyx-benchmark repo


@michaeldrogalis Do emitted Onyx metrics take into account the parallelism of tasks?


Is there a document somewhere that would help me make sense of all these stats? Not sure the difference between max batch latency and percentile batch latency


@aengelberg Generally the receiver of the metrics coalesces the values based on the host that sent it, so you can roll up the values as a whole, or slice it by host.
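The roll-up described above can be sketched as a Riemann stream. This is a hypothetical `riemann.config` fragment, assuming events are tagged `"throughput_60s"` as in the dashboard query mentioned in this thread; the exact service names your Onyx metrics use may differ:

```clojure
; riemann.config sketch (hypothetical tag/service names).
; coalesce keeps the most recent event per host+service, so the
; fold below sums across peers instead of showing one at random.
(streams
  (where (tagged "throughput_60s")
    (coalesce
      (smap folds/sum
        (with :service "throughput_60s total"
          index)))))
```

With something like this in place, the dashboard can query the summed `"throughput_60s total"` service, or drop the `coalesce`/`smap` step to keep slicing by host.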


We should probably get descriptions on the metric names. Most of them are obvious, but a few aren't.


I'm out now, can answer more questions tomorrow.


But it sounds like I shouldn't expect to get useful information if I just use the Riemann dashboard?


For example, if I query for tagged "throughput_60s", the dashboard will always be showing me one random metric sent by one virtual peer, not the total throughput across peers


(Onyx noob here) I am working with Onyx and my input node is using the onyx-kafka plugin. Does anyone have any insight into :kafka/fetch-size and its impact on performance?


I am processing rather large messages, and increasing it by a factor of 10 from the default value gives a dramatic improvement in my setup. I am trying to understand this behaviour. Any help would be much appreciated!


The other property which is affecting performance independently of fetch-size is empty-read-back-off. If I just reduce that to 50 from 500, I see a big improvement in performance.
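For context, both properties are set on the input task's catalog entry. This is a hypothetical sketch (topic, namespace, and values are placeholders; check your onyx-kafka version's docs for the actual defaults):

```clojure
;; Hypothetical Onyx catalog entry for an onyx-kafka input task.
{:onyx/name :read-messages
 :onyx/plugin :onyx.plugin.kafka/read-messages
 :onyx/type :input
 :onyx/medium :kafka
 :kafka/topic "my-topic"
 :kafka/zookeeper "127.0.0.1:2181"
 :kafka/deserializer-fn :my.ns/deserialize
 ;; bytes fetched per request; raising this (e.g. 10x) helps when
 ;; individual messages are large, since more fit in one fetch
 :kafka/fetch-size 3072000
 ;; ms to sleep after a read returns nothing; lowering 500 -> 50
 ;; was the change discussed above
 :kafka/empty-read-back-off 50
 :onyx/batch-size 100
 :onyx/max-peers 1}
```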


@rohit: fetch size will help you read more messages in one go, especially if the messages are large, I believe. I'm a bit surprised that the empty read back off is hurting you throughput-wise. Once you do an empty read back off, there should be enough messages that you will read a bunch again and things will go fast. Hmm


@lucasbradstreet: I am surprised as well. I am going to investigate this. thanks!


Thank you. Let us know how you go. It's possible there's a bug.