
@jasonbell, curious how you're managing your jobs in Mesos? Are you submitting them through Marathon or some other means?


@jasonbell happy to talk to you about your CPU load issue when you come back on


@lucasbradstreet Are there docs on what's required to move to 0.10?


ah, I knew I saw it somewhere


Let me know if you hit any issues that aren’t described there.


will do, doing some initial investigation to see what it would take for us.


@lucasbradstreet Is there any chance of local-fs durable storage checkpointing?


You can set it to checkpoint in ZooKeeper if you aren't checkpointing a lot of data. I'm not very inclined to support checkpointing to a local fs, since it's not useful for multi-node use, and there's already the ZooKeeper impl for testing.
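For anyone following along, checkpointing to ZooKeeper is a peer-config switch. A minimal sketch, with illustrative values; the exact key names are my recollection and should be verified against the Onyx 0.10 cheat sheet:

```clojure
;; Peer-config fragment: checkpoint to ZooKeeper instead of an
;; object store. Only suitable when checkpoints are small (testing,
;; single-node use), since ZooKeeper znodes have a small size limit.
{:onyx/tenancy-id "my-tenancy"        ;; hypothetical value
 :zookeeper/address "127.0.0.1:2181"  ;; hypothetical address
 :onyx.peer/storage :zookeeper}
```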


gotcha. We have a use case where we're running a single node (kind of an internal process), to reuse the same process at much, much smaller scale.


Good idea or not is yet to be determined, lol


Mmm, I can see that it could be useful. I’d accept a PR. It wouldn’t be too hard to implement.


cool, will take a look when I get time. It's definitely not a normal use case, but we're trying to reuse the ingest we run at scale in an appliance-like setting for much smaller stuff.


Yeah, it’s nice to be able to scale up like that


@camechis In regards to your mesos question, yes deploying through Marathon.


Good morning @jasonbell. I'm about to go to sleep but I have enough time for a few quick questions to narrow down your CPU load issue. Firstly, how many vpeers on each node are you using? How many cores does the machine have. How many tasks? And are you using any aggregates?


Also, was it a lot faster / less load on 0.9.15?


If it was faster / less load on 0.9, one thing that jumps to mind is that your serialisation overhead might be higher, because we don't currently allow short-circuiting for messages between peers on the same node


Since you're pushing large messages around, that could easily be increasing overhead


@lucasbradstreet It was 8 tasks (in/out/functions); the input task read from a Kafka topic with three partitions, and there were three peers on the input.


No aggregation, no windowing.


The main thing to keep in mind is that deserialising the messages meant uncompressing gzip files and then passing them on in the workflow for processing.


So it was one docker peer with 12 vpeers and the throughput was okay during testing. Once the volume was ramped up we hit the memory/performance issues.


Yesterday I went for one partition per peer so there are now three docker containers deployed, one per partition. That's calmed things down a lot.
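The partition-per-peer layout described above can be pinned down in the catalog by capping the input task at one peer, then deploying one container per partition. A sketch assuming the onyx-kafka plugin; task name, topic, and addresses are hypothetical, and the plugin keys should be checked against the onyx-kafka README:

```clojure
;; Kafka input task capped at a single peer. Deploy one container
;; per partition so Marathon can redeploy each one independently
;; if it dies, while the others keep consuming.
{:onyx/name :read-events                        ;; hypothetical
 :onyx/plugin :onyx.plugin.kafka/read-messages
 :onyx/type :input
 :onyx/medium :kafka
 :kafka/topic "events"                          ;; hypothetical
 :kafka/zookeeper "zk1:2181"                    ;; hypothetical
 :onyx/min-peers 1
 :onyx/max-peers 1
 :onyx/batch-size 100}
```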


There are a few more things I'm going to alter this morning, like taking my original heartbeat out, since the Onyx 0.10 metrics endpoint can now serve that (response 200 etc.)


Interesting. Same number of nodes / cores? Just split up differently?


But the information you gave me on the Aeron buffers and the calculation rationale behind them helped me an awful lot, so thank you.


Just split up differently


To be honest, from a node-maintenance point of view I'm happier with that; at least Marathon/Mesos will redeploy a container if it dies, while the other two keep going.


I'll keep you posted of any interesting developments.


No worries on the buffer calculation rationale. It's our fault for not having it documented yet.


I'll do some testing at some point to make sure backpressuring kicks in nicely in the scenario you're describing. One thing you can do is increase the min and max idle times for the peers


That'll make the peers yield more when things are blocked (the code is written in a non-blocking way, and we park the process for a bit when offers fail)


The defaults may be a bit aggressive for situations where number of peers != number of cores
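Concretely, the idle-time knobs mentioned above live in the peer-config. A sketch with illustrative values; the key names are my recollection of the 0.10 options and should be confirmed against the Onyx cheat sheet:

```clojure
;; Back off harder when offers fail: raise the min/max park times.
;; Useful when the vpeer count exceeds the core count, at the cost
;; of a little extra latency. Values below are assumptions.
{:onyx.peer/idle-min-sleep-ns 50000    ;; 50 µs minimum park
 :onyx.peer/idle-max-sleep-ns 500000}  ;; 500 µs maximum park
```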


Just to confirm: in the script, $NPEERS is the number of virtual peers that are started up, not an actual peers-per-CPU count. That's the way I've read it.


Vpeers, I believe, is correct


thought so, but just had that niggling doubt in my mind so thought better to confirm than assume 🙂