2017-03-01
@jasonbell, Curious how you're managing your jobs in Mesos? Are you submitting them through Marathon or some other means?
@jasonbell happy to talk to you about your CPU load issue when you come back on
@lucasbradstreet Are there docs on what is required to move to 0.10?
Let me know if you hit any issues that aren’t described there.
@lucasbradstreet Is there any chance of durable checkpointing to a local fs?
You can set it to checkpoint in ZooKeeper if you aren't checkpointing a lot of data. I'm not very inclined to checkpoint to a local fs, since it's not useful for multi-node use and there's already the ZooKeeper impl for testing.
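(For reference, a minimal peer-config sketch of what checkpointing to ZooKeeper might look like; the key names are recalled from the 0.10 cheat sheet, and the addresses and tenancy id are placeholders, so double-check everything against the cheat sheet.)

```clojure
;; Sketch only: a peer-config that checkpoints to ZooKeeper rather than S3.
;; Key names are recalled from the 0.10 cheat sheet; the addresses and
;; tenancy id below are placeholders.
(def peer-config
  {:zookeeper/address "127.0.0.1:2181"
   :onyx/tenancy-id "dev-tenancy"
   :onyx.peer/job-scheduler :onyx.job-scheduler/greedy
   :onyx.messaging/impl :aeron
   :onyx.messaging/bind-addr "localhost"
   ;; keep checkpointed state small if you use ZooKeeper storage
   :onyx.peer/storage :zookeeper})
```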
Gotcha. We have a use case where we're running a single node (kind of an internal process), reusing the process at much, much smaller scale.
Mmm, I can see that it could be useful. I’d accept a PR. It wouldn’t be too hard to implement.
Cool, will take a look when I get time. It's definitely not a normal use case, but we're trying to reuse the ingest we run at scale in an appliance-like setting for much smaller stuff.
Yeah, it’s nice to be able to scale up like that
Good morning @jasonbell. I'm about to go to sleep but I have enough time for a few quick questions to narrow down your CPU load issue. Firstly, how many vpeers are you using on each node? How many cores does the machine have? How many tasks? And are you using any aggregates?
Also, was it a lot faster / less load on 0.9.15?
If it was faster / less load on 0.9, one thing that jumps to mind is that your serialisation overhead might be higher, because we don't currently allow short-circuiting for messages between peers on the same node.
Since you're pushing large messages around, that could easily be increasing overhead.
@lucasbradstreet It was 8 tasks (in/out/functions); the input task was a Kafka topic with three partitions, so there were three peers on the input.
The main thing to keep in mind is that deserialising the messages meant uncompressing gzip files and then passing them on in the workflow for processing.
So it was one Docker peer with 12 vpeers, and the throughput was okay during testing. Once the volume was ramped up we hit the memory/performance issues.
Yesterday I went for one partition per peer, so there are now three Docker containers deployed, one per partition. That's calmed things down a lot.
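(As an aside, a hedged sketch of what a one-peer-per-partition Kafka input task could look like in the catalog; the :kafka/* keys and the gunzip helper are illustrative assumptions, not taken from this conversation, and depend on the onyx-kafka version in use.)

```clojure
;; Sketch only: pin the Kafka input task to one peer per partition.
;; :onyx/n-peers sets min and max peers together; the :kafka/* keys and the
;; :my.app/gunzip-and-decode helper are hypothetical placeholders.
{:onyx/name :read-events
 :onyx/plugin :onyx.plugin.kafka/read-messages
 :onyx/type :input
 :onyx/medium :kafka
 :kafka/topic "events"
 :kafka/deserializer-fn :my.app/gunzip-and-decode
 :onyx/n-peers 3          ;; topic has three partitions
 :onyx/batch-size 50}
```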
There are a few more things I'm going to alter this morning, such as taking my original heartbeat out, as the Onyx 0.10 metrics can now serve that (response 200, etc.).
Interesting. Same number of nodes / cores? Just split up differently?
But the information you gave me on Aeron buffers and the calculation rationale behind them helped me an awful lot, so thank you.
To be honest, from a node maintenance point of view I'm happier with that; at least Marathon/Mesos will redeploy the container if it dies while the other two keep going.
No worries on the buffer calculation rationale. It's our fault for not having it documented yet.
I'll do some testing at some point to make sure backpressure kicks in nicely in the scenario you're describing. One thing you can do is increase the min and max idle times for the peers.
That'll make the peers yield more when things are blocked (the code is written in a non-blocking way, and we park the process for a bit when offers fail).
See http://www.onyxplatform.org/docs/cheat-sheet/latest/#peer-config/:onyx.peer/idle-min-sleep-ns
The defaults may be a bit aggressive for situations where number of peers != number of cores
Sleep time
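(A minimal sketch of bumping the idle sleep times in the peer-config; :onyx.peer/idle-min-sleep-ns comes from the cheat sheet linked above, while the max key name, the base-peer-config map, and the values here are assumptions for illustration, not tuning advice.)

```clojure
;; Sketch: make peers park longer when offers fail (reduces busy-spinning CPU).
;; Values are nanoseconds and purely illustrative; check the cheat sheet for
;; the exact key names and defaults.
(def peer-config
  (merge base-peer-config                          ;; your existing peer-config map
         {:onyx.peer/idle-min-sleep-ns 500000      ;; 0.5 ms minimum park
          :onyx.peer/idle-max-sleep-ns 5000000}))  ;; 5 ms maximum park
```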