This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-02-27
Reading the environment notes in the user guide, I should be setting :onyx.messaging/bind-addr
to an actual interface IP. How do you approach that on the likes of Mesos when you don't know what the IP is going to be? Will localhost
suffice?
@jasonbell solved this with a little shell scripting, and I believe I used host networking
@jasonbell that will be the address published to ZooKeeper and discovered by other peers. It should be routeable from every other peer in your Onyx cluster. For Kubernetes, we use the downward API to discover the pod/container IP and set it there. I'm not familiar with Mesos/Marathon but I believe they offer something similar.
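As a concrete sketch of the downward-API approach mentioned above (the env var name and snippet placement are illustrative, not anything Onyx prescribes):

```yaml
# Kubernetes downward API: expose the pod's IP to the peer container
# as an environment variable.
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```

The peer config can then pick it up at startup, e.g. `{:onyx.messaging/bind-addr (System/getenv "POD_IP")}`.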
Okay, two more quick questions. Firstly, are there steps to stop the Docker container's shared memory getting totally exhausted?
17-02-27 14:51:43 8c8779428f59 INFO [onyx.messaging.aeron.status-publisher:33] - Closing status pub. {:completed? false, :src-peer-id #uuid "3027650e-7c58-a6e3-d10c-2ad8c6e5ae57", :site {:address "localhost", :port 40200, :aeron/peer-task-id nil}, :blocked? nil, :pos 0, :type :status-publisher, :stream-id 0, :dst-channel "aeron:udp?endpoint=localhost:40200", :dst-peer-id #uuid "adefe2ab-9ce1-3939-fca9-97aad3ba15e9", :dst-session-id 1453547509, :short-id 0, :status-session-id nil}
Warning: space is running low in /dev/shm (shm) threshold=167,772,160 usable=10,391,552
and secondly, are there any notes on using JConsole to inspect the Onyx peer cluster while it's running?
@jasonbell 0.10 forwards all metrics into JMX.
17-02-27 16:02:46 f05509f2c393 INFO [onyx.http-query:282] - Starting http query server on 0.0.0.0:18083
17-02-27 16:02:46 f05509f2c393 INFO [onyx.monitoring.metrics-monitoring:82] - Started Metrics Reporting to JMX.
Is there a specific port I should be looking at? jconsole doesn't connect or do anything when I try 18083.
Yes, but I don't see any mention of an exposed JMX port in the Docker setup. Do I set one up myself?
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
@jasonbell Did a connection to 9010 work?
Yeah, you need to open the standard JMX ports for Flight Recorder/JConsole or whatever else you’re using to talk to it.
That warning above is troubling; if you’re running on Mesos you’ll want to allocate some space at /dev/shm
for the Aeron media driver.
I think Mesos/Marathon has a --shm-size option?
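For plain Docker, the equivalent flag looks like this (the size and image name are illustrative; size your /dev/shm to the Aeron media driver's actual needs):

```shell
# Give the container 512 MB at /dev/shm for the Aeron media driver
docker run --shm-size=512m my-onyx-peer-image
```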
@jasonbell There are some caveats with running JMX monitoring inside a container too
Memory_ObjectPendingFinalizationCount 0
Memory_HeapMemoryUsage_committed 966787072
Memory_HeapMemoryUsage_init 264241152
Memory_HeapMemoryUsage_max 966787072
Memory_HeapMemoryUsage_used 820930136
Memory_NonHeapMemoryUsage_committed 152645632
Memory_NonHeapMemoryUsage_init 2555904
Memory_NonHeapMemoryUsage_max -1
Memory_NonHeapMemoryUsage_used 151136728
You’ll need to set -Djava.rmi.server.hostname= to something that JConsole/Flight Recorder can resolve to the container.
These are my settings for remote monitoring inside a Kubernetes cluster
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.port=1099
-Dcom.sun.management.jmxremote.rmi.port=1099
-Djava.rmi.server.hostname=127.0.0.1
Kubernetes has a port-forward mechanism that will set up a tunnel and allow container ports to be connected to at 127.0.0.1.
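A sketch of that flow, assuming a pod named onyx-peer-0 (the name is illustrative) running with the JMX flags shown earlier, where both the JMX and RMI ports are pinned to 1099:

```shell
# Tunnel local port 1099 to the pod's JMX/RMI port
kubectl port-forward onyx-peer-0 1099:1099

# In another terminal, point JConsole at the forwarded port:
# jconsole 127.0.0.1:1099
```

Pinning -Dcom.sun.management.jmxremote.rmi.port to the same value as the JMX port, with -Djava.rmi.server.hostname=127.0.0.1, is what lets the RMI callback traverse the single forwarded port.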