2016-08-29
@michaeldrogalis Was thinking of writing a k-means|| implementation on top of it, but wondering if it's worth it if the next release will make it easier, as @mlimotte mentioned.
@brandoff I'm admittedly not familiar with how to most efficiently implement that, or what primitives would be required. If local iteration is what you need, Onyx can do that now. If records need to efficiently flow through repeated tasks, you'll want to wait.
Gotta run, happy to answer questions when I'm back.
@brandoff wouldn't you be using a window of data, and running kmeans on each window separately?
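(For context: a single Lloyd-style k-means iteration over one window's worth of points is easy to express in plain Clojure. Below is a minimal, illustrative sketch of what could run inside whatever function consumes each window; all names here are hypothetical, not part of Onyx.)

```clojure
;; Illustrative k-means step over one window of points.
;; `points` and `centroids` are collections of equal-length vectors of numbers.
(defn sq-dist [a b]
  (reduce + (map (fn [x y] (let [d (- x y)] (* d d))) a b)))

(defn nearest-centroid [centroids p]
  (apply min-key #(sq-dist % p) centroids))

(defn mean-point [ps]
  (let [n (count ps)]
    (mapv #(/ % n) (apply mapv + ps))))

(defn kmeans-step
  "One Lloyd iteration: assign each point to its nearest centroid, then
   recompute each centroid as the mean of its assigned points.
   (Empty clusters are simply dropped in this sketch.)"
  [centroids points]
  (->> points
       (group-by (partial nearest-centroid centroids))
       vals
       (mapv mean-point)))

;; e.g. run a fixed number of iterations over one window's points:
;; (nth (iterate #(kmeans-step % window-points) initial-centroids) 10)
```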
@lucasbradstreet i don't suppose you've coordinated ext aeron and an onyx jar with systemd yet, have you?
Afraid I haven’t. We created scripts that are started up by s6 in alpine linux on Docker, but no systemd yet
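(For anyone else attempting the systemd route, a sketch of what a pair of units might look like follows: one for the Aeron media driver, one for the peers. All paths, jar names, and the media-driver main class are assumptions to adapt; the main class in particular varies by Aeron version.)

```ini
# /etc/systemd/system/aeron-media-driver.service  (sketch; adjust paths/classes)
[Unit]
Description=Aeron media driver for Onyx peers
After=network.target

[Service]
# Main class name depends on the Aeron version your Onyx release uses.
ExecStart=/usr/bin/java -cp /opt/onyx/app.jar io.aeron.driver.MediaDriver
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

```ini
# /etc/systemd/system/onyx-peers.service  (sketch)
[Unit]
Description=Onyx virtual peers
After=aeron-media-driver.service
Requires=aeron-media-driver.service

[Service]
# Assumes the uberjar's -main starts the peer group and virtual peers.
ExecStart=/usr/bin/java -jar /opt/onyx/app.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```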
ok, cool 🙂
i'm pretty close, i think
bit of an interesting one, @lucasbradstreet
this appears to be two hosts both running a task set to max-peers 1
i've just made sure they're all using the same tenancy-id -- they are (it's based on our git sha)
i've also made sure that the mechanism that decides which instance will submit jobs ensures just a single instance (it does)
what do you think could cause metrics from two instances for a single task like that?
You didn't end up splitting your jobs up, right? I remember some discussion about two jobs both with read-log tasks (sanity question)
i'm summing by host
on datadog
is it possible for input tasks to report these metrics from multiple hosts?
no, it's all on one read-log
Ok. Next most likely was that you had submitted the job twice, but it sounds like that isn't the case either
actually, this explains why we sometimes see 50ms flatlines and 100ms flatlines; multiple instances
oh, certainly submitted more than once, but killed as well
i've totally forgotten how to ask ZK what it's currently got. can you aid me on that, or should i RTM?
You mean for the current Onyx allocation? Easiest would be the dashboard if you can get it up
ok. i'll give that a go
there's not perhaps a shell command i can issue to ZK directly?
if i know the tenancy id
Not really because you have to play back the Onyx log using our code to actually get a view of the replica back
There will be a web service you can switch on that will make this trivial in the next release
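(For reference, "playing back the Onyx log using our code" looked roughly like the following in the 0.9 series. This is a sketch based on the log-subscriber pattern from the Onyx docs of that era; the peer-config values are placeholders and must match the running cluster's.)

```clojure
(require '[clojure.core.async :refer [chan <!!]]
         '[clojure.pprint :refer [pprint]]
         '[onyx.api]
         '[onyx.extensions :as extensions])

;; Placeholder values; the ZooKeeper address and :onyx/tenancy-id
;; must be the same ones the cluster was started with.
(def peer-config
  {:zookeeper/address "127.0.0.1:2181"
   :onyx/tenancy-id "your-tenancy-id"})

(def ch (chan 100))
(def subscription (onyx.api/subscribe-to-log peer-config ch))

;; Start from the origin replica and fold in log entries as they arrive.
;; Here we apply the first 100 entries (this blocks if fewer exist),
;; then inspect the task allocations.
(def replica
  (reduce (fn [r _] (extensions/apply-log-entry (<!! ch) r))
          (:replica subscription)
          (range 100)))

(pprint (:allocations replica))
```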
what version will that be?
we're still on 0.9.6
0.9.10
awesome
be keen to try it out
wow. some pretty big fixes since 0.9.6. will defo get upgraded
I'm about to work with exception handling with flow conditions in Onyx, so I'm reading the docs for clues. The docs say :flow/thrown-exception? true set in a flow condition will cause that flow to be activated in a failure case. But it does not say whether the flow condition will ONLY be activated in a failure case, i.e. does a flow condition with :flow/thrown-exception? true also get called with successful segments?
@aengelberg It does not get called for successful segments, no.
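(For reference, an exception-handling flow condition from that part of the docs looks roughly like the sketch below; the task and function names are placeholders.)

```clojure
;; Sketch of an exception-handling flow condition (names are placeholders).
;; :flow/thrown-exception? true requires :flow/short-circuit? true, and the
;; predicate receives the exception object in place of the new segment.
(def flow-conditions
  [{:flow/from :process-data
    :flow/to [:error-handler]
    :flow/thrown-exception? true
    :flow/short-circuit? true
    :flow/predicate :my.app/handle-error?
    ;; Turns the exception into a segment for the :error-handler task.
    :flow/post-transform :my.app/exception->segment}])

(defn handle-error? [event old-segment e all-new]
  (instance? java.lang.Exception e))

(defn exception->segment [event segment e]
  {:error (.getMessage e)})
```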
thanks
question regarding the onyx-dashboard... The Job Visualization panel is very nice, is there a way to make it shrink or scroll to see the entire workflow when the workflow doesn't fit in the allotted space?
Hey, where are the docs for how to actually provision an onyx cluster in production? Run against mesos, etc?
It's kind of up to you to figure it out based on your platform, but there's not a lot to it
We don't have anything public for Mesos right now, but the onyx-twitter-sample has some Kubernetes manifests. It's fairly straightforward.
Through Zookeeper
I have been deploying on Mesos for a month or so now. Only thing I haven't worked out is how I want to manage jobs
@smw You can deploy a web server in lib-onyx to track the log and act as if it were a master. https://github.com/onyx-platform/lib-onyx#replica-http-server
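(Once that replica server is running, inspecting cluster state is a matter of hitting its HTTP endpoints. The exact paths below are from memory and should be checked against the README linked above.)

```
# Endpoint paths are assumptions; verify against the lib-onyx README.
curl http://localhost:3000/replica/peers
curl http://localhost:3000/replica/jobs
```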
Anytime ^^