#onyx
2016-09-10
aengelberg00:09:04

if I get onyx sending metrics to riemann and I have a riemann dashboard, is there a known easy way I can get those stats showing on the riemann dashboard?
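For context, wiring Onyx metrics to Riemann is typically done with an onyx-metrics lifecycle entry. A hedged sketch is below — the key names follow the onyx-metrics README of this era, and the sender keyword, `:riemann/address`, and `:riemann/port` are assumptions to verify against your dependency's version:

```clojure
;; Sketch of an onyx-metrics lifecycle entry that ships task metrics to
;; Riemann. Key names may differ between onyx-metrics versions — check
;; the README for the version you depend on.
[{:lifecycle/task :all                      ; instrument every task
  :lifecycle/calls :onyx.lifecycle.metrics.metrics/calls
  :metrics/buffer-capacity 10000
  :metrics/workflow-name "my-workflow"      ; hypothetical workflow name
  :metrics/sender-fn :onyx.lifecycle.metrics.riemann/riemann-sender
  :riemann/address "localhost"              ; Riemann TCP endpoint
  :riemann/port 5555
  :lifecycle/doc "Sends batch latency/throughput metrics to Riemann"}]
```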

Travis01:09:35

Not sure, I'm using a Grafana/InfluxDB/Riemann stack

Travis01:09:03

Basing it off the onyx-benchmark repo

aengelberg02:09:51

@michaeldrogalis Do emitted Onyx metrics take into account the parallelism of tasks?

aengelberg03:09:32

Is there a document somewhere that would help me make sense of all these stats? Not sure the difference between max batch latency and percentile batch latency

michaeldrogalis03:09:32

@aengelberg Generally the receiver of the metrics coalesces the values by the host that sent them, so you can roll up the values as a whole, or slice them by host.

michaeldrogalis03:09:11

We should probably get descriptions on the metric names. Most of them are obvious, but a few aren't.

michaeldrogalis03:09:23

I'm out now, can answer more questions tomorrow.

aengelberg03:09:15

But it sounds like I shouldn't expect to get useful information if I just use the Riemann dashboard?

aengelberg03:09:59

For example, if I query for tagged "throughput_60s", the dashboard will always show me a single metric sent by one virtual peer, not the total throughput across peers
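The roll-up described above can be done on the Riemann side with a stream that groups events by service and sums the latest event from each peer. A minimal, untested sketch for `riemann.config` (the "throughput_60s" tag is taken from the question; grouping keys are an assumption):

```clojure
;; riemann.config sketch: sum "throughput_60s" events across all peers
;; so the dashboard shows total throughput rather than one peer's value.
(require '[riemann.folds :as folds])

(streams
  (where (tagged "throughput_60s")
    ;; split the stream per task (service name), then within each task
    ;; hold the most recent event from every host/peer and emit the sum
    (by [:service]
      (coalesce 10
        (smap folds/sum
          (with :host nil index))))))
```

`coalesce` keeps the latest event per host, `folds/sum` adds their metrics, and clearing `:host` lets the summed event index as one dashboard series.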

rohit10:09:58

(Onyx noob here) I am working with Onyx and my input node uses the onyx-kafka plugin. Does anyone have any insight into :kafka/fetch-size and its impact on performance?

rohit10:09:40

I am processing rather large messages, and increasing it by a factor of 10 from the default value produced a dramatic improvement in my setup. I am trying to understand this behaviour. Any help would be much appreciated!

rohit11:09:32

The other property affecting performance, independently of fetch-size, is empty-read-back-off. If I just reduce it from 500 to 50, I see a big improvement in performance.
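For reference, both tunables live on the onyx-kafka input catalog entry. A hedged sketch follows — the topic, group id, and deserializer are hypothetical, and default values and exact key names can vary by plugin version, so check the onyx-kafka README for yours:

```clojure
;; Sketch of an onyx-kafka input catalog entry showing the two tunables
;; under discussion. Values here are illustrative, not recommendations.
{:onyx/name :read-messages
 :onyx/plugin :onyx.plugin.kafka/read-messages
 :onyx/type :input
 :onyx/medium :kafka
 :kafka/topic "events"                      ; hypothetical topic
 :kafka/group-id "my-consumer-group"
 :kafka/zookeeper "127.0.0.1:2181"
 :kafka/deserializer-fn :my.app/deserialize-message
 :kafka/offset-reset :smallest
 :kafka/fetch-size 3072000                  ; bytes per fetch; ~10x the default
 :kafka/empty-read-back-off 50              ; ms to wait after an empty read (was 500)
 :onyx/batch-size 100
 :onyx/max-peers 1}
```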

lucasbradstreet12:09:16

@rohit: fetch size will help you read more messages in one go, especially if the messages are large, I believe. I'm a bit surprised that the empty read back-off is hurting your throughput. Once you hit an empty read back-off there should be enough messages waiting that you will read a bunch again and things will go fast. Hmm

rohit12:09:02

@lucasbradstreet: I am surprised as well. I am going to investigate this. thanks!

lucasbradstreet12:09:25

Thank you. Let us know how you go. It's possible there's a bug