This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-11-17
Channels
- # bangalore-clj (4)
- # beginners (60)
- # boot (63)
- # cider (2)
- # cljs-dev (22)
- # cljsrn (3)
- # clojars (32)
- # clojure (133)
- # clojure-gamedev (1)
- # clojure-germany (17)
- # clojure-italy (1)
- # clojure-russia (11)
- # clojure-serbia (16)
- # clojure-spec (35)
- # clojure-uk (75)
- # clojurebridge (1)
- # clojurescript (83)
- # community-development (25)
- # core-async (43)
- # cursive (15)
- # datomic (28)
- # emacs (2)
- # fulcro (108)
- # graphql (5)
- # hoplon (15)
- # lein-figwheel (6)
- # leiningen (39)
- # lumo (106)
- # new-channels (1)
- # off-topic (4)
- # om (26)
- # om-next (53)
- # onyx (46)
- # other-languages (2)
- # perun (1)
- # protorepl (5)
- # re-frame (13)
- # ring (18)
- # ring-swagger (1)
- # rum (6)
- # shadow-cljs (82)
- # spacemacs (19)
- # specter (5)
- # sql (3)
- # test-check (31)
- # unrepl (12)
- # untangled (2)
- # vim (109)
@lmergen onyx.api/job-ids has been converted to onyx.api/job-ids-history, so you can get a full history of a job-by-name. I’ve also modified job-snapshot-coordinates, and it’ll now walk back through the history until it finds a job with a snapshot.
New onyx.api function that will be in 0.12:
(onyx.api/job-state zk-addr tenancy-id job-id): plays back the log for that tenancy-id and returns a map describing the job state.
eg.
{:cluster-alive? true,
 :peer-allocations {:inc [#uuid "18592939-6597-719d-9cc3-0981e992657b"],
                    :out [#uuid "e271deab-0e82-05b4-f02e-916d74c54a4d"],
                    :in  [#uuid "1491fbe3-bc39-549a-86fa-2580e5e3a420"]},
 :peer-sites {#uuid "18592939-6597-719d-9cc3-0981e992657b" {:address "localhost", :port 40199},
              #uuid "e271deab-0e82-05b4-f02e-916d74c54a4d" {:address "localhost", :port 40199},
              #uuid "1491fbe3-bc39-549a-86fa-2580e5e3a420" {:address "localhost", :port 40199}},
 :allocation-version 4,
 :job-status :running}
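A rough sketch of calling the new fn from a REPL, based on the signature given above. The argument values are placeholders, and this assumes a running Onyx 0.12 snapshot and a reachable ZooKeeper, so treat it as illustration rather than a runnable test:

```clojure
;; Sketch only: zk-addr, tenancy-id, and job-id below are placeholders.
(require '[onyx.api])

(let [zk-addr    "127.0.0.1:2181"   ; ZooKeeper connect string
      tenancy-id "my-tenancy"       ; the tenancy the job was submitted under
      job-id     #uuid "18592939-6597-719d-9cc3-0981e992657b"
      ;; plays back the log for that tenancy and summarises the job
      state      (onyx.api/job-state zk-addr tenancy-id job-id)]
  ;; e.g. only act on the job if it is still running
  (when (= :running (:job-status state))
    (println "peers allocated:" (:peer-allocations state))))
```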
oh wow apparently i was implementing exactly this functionality at the same time haha
Which part, the job-state? or onyx.api/job-ids?
Ah, cool. Give it a go and let me know what you think. I’m happy to make changes if there’s anything else you want in it before 0.12 is released.
Right, yeah. I realised with just a little bit more data in ZK we could do the job-history better. We’re going to ditch most of our custom datomic reconciliation and just keep the job-status detection in datomic (mostly for quick lookup because otherwise we have to play back the log).
You can try the snapshot at: 0.12.0-20171117.063935-35
. New fns https://github.com/onyx-platform/onyx/blob/master/src/onyx/api.clj#L279 and https://github.com/onyx-platform/onyx/blob/master/src/onyx/api.clj#L377
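Combining the two new fns might look roughly like this, going only by the descriptions above; the argument names and arities here are assumptions, so check the linked source for the real signatures:

```clojure
;; Sketch only: peer-config, job name, and tenancy-id are placeholders,
;; and the arities are assumed from the chat above, not verified.
(require '[onyx.api])

(let [peer-config {:zookeeper/address "127.0.0.1:2181"}
      tenancy-id  "my-tenancy"
      ;; full history of job-ids submitted under this job name
      history     (onyx.api/job-ids-history peer-config "my-job-name")
      latest-id   (:job-id (last history))]
  ;; walks back through the history until it finds a job with a snapshot
  (onyx.api/job-snapshot-coordinates peer-config tenancy-id latest-id))
```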
I need to create an onyx-example for these features now.
Hey folks, I’m trying to figure out how onyx-kafka 0.11 works. Does it commit offsets at any point in time? I see the consumer’s auto-commit is off by default and .commit isn’t called anywhere.
Asking because if I kill a job and then re-submit it with a new job-id, it will re-process messages the killed job has already processed.
Restarting peers without killing a job works as expected (the job proceeds from the latest processed offset).
since Onyx 0.10, it allows you to 'checkpoint' jobs and (re-)start jobs from a certain checkpoint
@lmergen do you mean these? http://www.onyxplatform.org/docs/user-guide/0.12.x/#resume-point
okay… it looks like this requires more manual work when re-submitting jobs (e.g. figuring out the previous job id)
so you'll not only resume the kafka consumer at a certain point, you'll also know exactly where your output was left at that same moment
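The re-submission flow being described might look roughly like the sketch below, using the resume-point fns from the linked user guide. The peer-config, job ids, and job map are placeholders, and the exact arities are assumptions to check against the docs:

```clojure
;; Sketch only: assumes the resume-point API from the 0.12 user guide.
;; All concrete values here are placeholders.
(require '[onyx.api])

(let [peer-config {:zookeeper/address "127.0.0.1:2181"}
      old-job-id  #uuid "00000000-0000-0000-0000-000000000000"
      ;; coordinates of the killed job's latest snapshot
      coords      (onyx.api/job-snapshot-coordinates peer-config old-job-id)
      ;; the job map you are re-submitting (workflow, catalog, etc.)
      new-job     {:workflow [[:in :out]]
                   :catalog  []
                   :lifecycles []
                   :task-scheduler :onyx.task-scheduler/balanced}]
  ;; resume both the Kafka input offsets and any task state from the snapshot
  (onyx.api/submit-job peer-config
                       (assoc new-job :resume-point
                              (onyx.api/build-resume-point new-job coords))))
```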
perhaps you should wait a bit until @lucasbradstreet or @michaeldrogalis come online
@jetmind @lmergen's description is accurate. The changes discussed slightly above by @lucasbradstreet automate looking up resume point. It's a little manual effort at the moment, but it's really not more than a few lines.
And yes, a lot has changed off 0.9.
Hi all! I’m trying to upgrade from 0.10 to 0.12, but I get the RuntimeException below when running my tests or trying to load a namespace with onyx-local-rt.api:
CompilerException java.lang.RuntimeException: Unable to resolve symbol: pos-int? in this context, compiling:(onyx/spec.cljc:18:1)
local-rt requires Clojure 1.9 because it leans on spec.
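pos-int? (and clojure.spec) arrived in Clojure 1.9, so the fix is bumping the Clojure dependency. A minimal Leiningen project.clj fragment, where the project name and the onyx-local-rt coordinate/version are placeholders and "1.9.0-beta4" stands in for whichever 1.9 release is current:

```clojure
;; project.clj fragment: onyx-local-rt leans on clojure.spec,
;; which ships with Clojure 1.9+. Versions below are illustrative.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.9.0-beta4"]      ; was e.g. 1.8.0
                 [org.onyxplatform/onyx-local-rt "0.12.0.0"]])
```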
How do resume points interact with grouping and flux policy? Can I change the number of virtual peers assigned to a grouped task between two jobs? It seems the :slot-migration key might be involved, but I can't find documentation, and it seems :direct is the only valid value. Does that mean we must always have the same number of virtual peers unless the :continue flux policy is set through resume points?
That is to be determined. At the moment we rely on there being the same number each time, because doing anything else would require repartitioning state.
Would it be enough to repartition it on a migration? That’s going to be a lot easier than repartitioning it on auto-scale-up.
It's not a required feature for us at this point, but for the use-cases I can think of, migration would be fine.
Great. It won’t be all that technically hard to implement, but we don’t have the resources to do it right now unless it’s sponsored.