This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-11-20
Channels
- # beginners (102)
- # boot (23)
- # cljs-dev (1)
- # clojure (52)
- # clojure-canada (7)
- # clojure-korea (2)
- # clojure-poland (1)
- # clojure-russia (35)
- # clojure-spec (39)
- # clojure-uk (5)
- # clojurescript (64)
- # cursive (11)
- # events (1)
- # hoplon (168)
- # lein-figwheel (2)
- # luminus (14)
- # off-topic (47)
- # om (3)
- # om-next (1)
- # onyx (31)
- # quil (4)
- # re-frame (21)
- # spacemacs (1)
- # sql (1)
- # untangled (3)
- # yada (4)
@akiel Yes it does.
The added benefit of writing out results as you go is that reporting intermediate progress to outside systems becomes a breeze. It’s not something bolted on.
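To make "writing out results as you go" concrete, here is a minimal sketch of a :trigger/sync function; write-progress! and the println are placeholders for a real write to an outside system, and the five-argument shape follows the Onyx 0.9.x docs, so check your version:
;; Hypothetical :trigger/sync function: Onyx calls it when a trigger fires,
;; passing the aggregated state for the window extent that fired.
(defn write-progress!
  [event window trigger state-event extent-state]
  ;; Replace println with a write to your datastore or metrics system.
  (println "window" (:window/id window) "intermediate result:" extent-state))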
@michaeldrogalis Thanks, I'll try it. Which :trigger/on should I use? Is the :segment trigger also called at the end of the window?
@akiel All the default triggers are called at the stopping of a task, yes.
You could trigger on a timer every N seconds perhaps, or every 10,000 segments maybe.
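Trigger entries along those lines might look roughly like this; a hedged sketch based on the Onyx 0.9.x trigger docs, with :collect-segments and ::write-progress! as placeholder names:
;; Fires on a wall-clock timer, every 5 seconds.
{:trigger/window-id :collect-segments
 :trigger/refinement :onyx.refinements/accumulating
 :trigger/on :onyx.triggers/timer
 :trigger/period [5 :seconds]
 :trigger/sync ::write-progress!}
;; Fires every 10,000 segments.
{:trigger/window-id :collect-segments
 :trigger/refinement :onyx.refinements/accumulating
 :trigger/on :onyx.triggers/segment
 :trigger/threshold [10000 :elements]
 :trigger/sync ::write-progress!}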
@michaeldrogalis Thanks.
@michaeldrogalis: I ran into the shared memory size problem under Docker, and I see no way to increase the size in Kubernetes. What is shared memory used for if I'm inside a Docker container with only one process? Can we disable something in case it's not needed?
There’s your peer JVM and the out-of-process media driver
You'll need to set your heap size on both JVMs, since the JVM won't correctly detect available memory in a Docker container.
Kubernetes lets you supply the container's memory allocation as an env var through the Downward API. Then you can set your heap sizes to a ratio of that.
You can also do something like this to correctly detect memory limits using the container's cgroup:
# Take the smaller of the cgroup memory limit and the host's total memory (both in bytes).
CGROUPS_MEM=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
MEMINFO_MEM=$(($(awk '/MemTotal/ {print $2}' /proc/meminfo)*1024))
MEM=$(($MEMINFO_MEM>$CGROUPS_MEM?$CGROUPS_MEM:$MEMINFO_MEM))
# Give this JVM a configurable fraction of that (default 20%), converted to MiB for -Xmx.
JVM_MEDIA_DRIVER_HEAP_RATIO=${JVM_MEDIA_DRIVER_HEAP_RATIO:-0.2}
XMX=$(awk '{printf("%d",$1*$2/1024^2)}' <<< "${MEM} ${JVM_MEDIA_DRIVER_HEAP_RATIO}")
Then launch your JVM with "-Xmx${XMX}m"
@gardnervickers Thanks. I currently run the embedded media driver. My JVM is set to a 4 GB heap with matching Kubernetes memory limits. Do I really need the separate media driver? What is the advantage?
Better messaging performance
You don’t absolutely need it though.
Can you please explain why the messaging performance is better if the media driver runs in a separate JVM?
There’s a great write up on it here. https://github.com/real-logic/Aeron/wiki/Media-Driver-Operation
Also, the JVM will allocate memory for other things besides the heap, like a stack for each thread, class metadata, etc., so -Xmx shouldn't be 100% of available memory.
@gardnervickers Yes for sure 🙂
Kubernetes can supply a memory-backed volume: https://github.com/onyx-platform/onyx-twitter-sample/blob/master/kubernetes/peer.deployment.yaml#L17-L20
Be aware that it’s part of your container memory limit
I’ve hit that before
@gardnervickers Thanks a lot. It works now. 🙂
Fantastic!
@gardnervickers Is this something that's generally good to apply? I'm keeping a file of gotchas and things to remember and wonder if I should keep this for later (the separate JVM for media driver).
Yes, if you’re running Onyx in production you should be running an out-of-process media driver
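As a rough sketch of that setup (peer-config keys as documented for Onyx 0.9.x; all other peer-config entries omitted), the peers are told not to start the embedded driver, and the Aeron media driver then runs as its own JVM on the same node, as described in the write-up linked above:
;; Peer-config sketch: use Aeron messaging without the embedded media driver.
;; ZooKeeper address, bind address, and the rest of the peer-config are omitted.
{:onyx.messaging/impl :aeron
 :onyx.messaging.aeron/embedded-driver? false}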