
@lucasbradstreet, could you throw me some hints: how do I start a new job with the checkpointed entries from a previously killed job? For example, I have a long-running data import job (SQL -> Datomic), and at some point during the import either the SQL DB connection or the transactor times out. I would like to be able to run a new job (identical catalog) from where the old one died (since it's not possible to resume a killed job). I'm thinking of reading all the checkpointed entries from ZooKeeper into the new job.


It's possible, but will require making a change to onyx-sql to allow checkpoints to be written to a custom key. A similar ability was added to onyx-kafka and onyx-datomic.


Sounds great, will look into this 🙂 Thanks for the pointer; will probably submit a PR.


Great! We have been meaning to do this for all the plugins, so I will definitely merge it


Haven't forgotten about updating the docs with the auto-gen tool; still on my list for this week.


We're thinking of reading the job data structure from a file (as JSON) at run-time to submit an Onyx job. A thought that occurred to me is that perhaps it is desirable to version the information model of the job, so that if the information model changes in new versions, things would still work... e.g. :onyx-version "0.9.11" in the job spec... Any thoughts on this?


@aaelony The information model is (generally) designed to be backwards-compatible. We deprecate things once in a while, but it's become relatively rare to make a change to the info model that breaks backwards compatibility.


At least for the kafka plugin, I think there have been a few breaking changes, though, right?


Recently, yes. We found some things that changed for Kafka 0.9 from 0.8 that we missed.


versioning might help with that, allowing changes to go forward perhaps. What happens if the Onyx job is specified as JSON outside of the program that includes the library? This is a cool idea, but I guess it begs the question of how to ensure the JSON job will still work over time if it needs to be re-submitted...


Most of them were more like bug fixes than feature alterations, though.


also, what are your thoughts concerning clojure.spec for the job spec itself?


might be a lot of work to write it out...


We have most of it done for internal use - we'll get it out once Clojure 1.9 is out I think.


You can get 95% of it by reading the information model and using a code generator. It's < 50 LOC
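The generator idea could be sketched like this. The info-model shape below is a simplified assumption for illustration; the real information model ships with Onyx as a much larger map:

```clojure
;; Sketch of "read the information model, generate validation".
;; `info-model` is a hypothetical, simplified shape — not Onyx's actual map.
(def info-model
  {:catalog-entry
   {:onyx/name       {:type :keyword :optional? false}
    :onyx/type       {:type :keyword :optional? false}
    :onyx/batch-size {:type :integer :optional? false}}})

(def type-pred
  {:keyword keyword?
   :integer integer?})

(defn invalid-keys
  "Returns the keys of entry m that are missing or of the wrong type,
  according to the info model for model-key."
  [model-key m]
  (for [[k {:keys [type optional?]}] (get info-model model-key)
        :let [v (get m k ::missing)]
        :when (if (= v ::missing)
                (not optional?)
                (not ((type-pred type) v)))]
    k))
```

For example, `(invalid-keys :catalog-entry {:onyx/name :in :onyx/type :input})` would flag the missing `:onyx/batch-size`. The same traversal could emit clojure.spec forms instead of checks.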


what do folks typically do currently when they deploy an uberjar? Do they bundle the job specs in the uberjar or do they allow jobs to be read in?


what do you mean by specs?


the job map with keys for :workflow, :catalog, etc..


i bundle mine in the uberjar


and based on main arguments it picks the job i want


all based off the lein template
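A minimal sketch of that pattern — job names and job maps here are illustrative, loosely following the lein template's -main dispatch:

```clojure
;; Illustrative job registry; real entries would be full Onyx job maps
;; with :workflow, :catalog, :lifecycles, :task-scheduler, etc.
(def jobs
  {"sql-import" {:workflow [[:read-rows :write-datomic]]}
   "kafka-etl"  {:workflow [[:read-kafka :write-db]]}})

(defn pick-job
  "Looks up the job selected by the first CLI argument."
  [job-name]
  (or (get jobs job-name)
      (throw (ex-info "Unknown job" {:job job-name}))))

;; In -main: (onyx.api/submit-job peer-config (pick-job (first args)))
```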


@camechis, suppose you are running the current uberjar (and job) for a while, and you want to release new changes to the existing job(s). How do you handle that?


suppose you are consuming kafka topic X and you now want to consume kafka topic Y ?


haven’t had the chance to experience that yet, but I think you essentially stop the job, deploy the new version, and start a new one. If they share the same group id then it will pick up where the last one finished


with kafka that is


So, you never have more than one job running on a cluster?


i will eventually have a ton of jobs. Just haven’t made it that far in the process yet


my goal would be to automate the deployment of those


essentially the same job but different configs per customer


so different kafka topics


it is nice to consider the job map to be standalone as a JSON file (that can be written by anyone/anything, in any language), slurped at job-submission time, and run (if it is well-formed, etc.)
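A minimal sketch of loading such a file, assuming keywords are serialized as strings with a leading ":" (a convention of this sketch, not anything Onyx requires; uses the org.clojure/data.json library):

```clojure
(ns job-loader
  (:require [clojure.data.json :as json]))

(defn restore-keywords
  "Walks parsed JSON and turns \":foo\"-style strings back into keywords."
  [x]
  (cond
    (and (string? x) (.startsWith ^String x ":")) (keyword (subs x 1))
    (vector? x) (mapv restore-keywords x)
    (map? x)    (into {} (map (fn [[k v]] [k (restore-keywords v)])) x)
    :else x))

(defn load-job
  "Parses a JSON job description into the map shape submit-job expects."
  [json-str]
  (restore-keywords (json/read-str json-str :key-fn keyword)))

;; Then: (onyx.api/submit-job peer-config (load-job (slurp "job.json")))
```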


at very least, I can imagine a "prod" job from a topic, and a "testing" job from the same topic. Or even another "testing job" from a different topic


just want to think it all through before it happens 😉


I also find myself writing helper functions (poorly) like this:

;; Assumes workflow, catalog, and lifecycles are defined in this ns,
;; and that clojure.set has been required.
(defn find-in-workflow [s]
  (->> workflow
       (mapcat identity)   ; flatten [[:src :dst] ...] into task names
       (into #{})
       (filter #(re-find (re-pattern s) (name %)))))

(defn find-in-catalog [s]
  (->> catalog
       (mapv :onyx/name)
       (filter #(re-find (re-pattern s) (name %)))))

(defn find-in-lifecycles [s]
  (->> lifecycles
       (mapv :lifecycle/task)
       (filter #(re-find (re-pattern s) (name %)))))

(defn job-audit [s]
  (let [hits-workflow   (into #{} (find-in-workflow s))
        hits-lifecycles (into #{} (find-in-lifecycles s))
        hits-catalog    (into #{} (find-in-catalog s))]
    {;; :found {:workflow-hits hits-workflow
     ;;         :lifecycle-hits hits-lifecycles
     ;;         :catalog-hits hits-catalog}
     :in-workflow-but-not-catalog (clojure.set/difference hits-workflow hits-catalog)
     :in-catalog-but-not-workflow (clojure.set/difference hits-catalog hits-workflow)
     :lifecycle-hits hits-lifecycles
     :lifecycle-catalog-intersection (clojure.set/intersection hits-lifecycles hits-catalog)}))
does anyone do this as well?


I am pretty much following the best practices of using task bundles to compose things


usually what differs for me is configuration


awesome, thanks @camechis


@aaelony We have some quite good facilities for parameterized jobs in the new product. It was a lot easier to do well in an integrated PaaS.


It's been an interesting pattern. We can do a number of things better because we can hide more. It's harder with open source templates because the more you add, the more complexity you stack on for someone using the template.


For event sourcing, can I guarantee message ordering without using a window, maybe by relying on the underlying messaging platform?


@yonatanel Not yet. We'll have some guarantees in the future for tasks that have exactly one peer assigned each, but not currently. It ends up being hard/impossible to offer good performance and also in-order processing, because of the mechanisms needed for failover, and for parallelization in general


@michaeldrogalis Would you recommend not using onyx for cqrs/es, or is it actually fine when using a window? I like the way onyx is built but I wonder if I should just use some actors framework or something similar.


@yonatanel Onyx is pretty good at those use cases. Ordering is always going to be hard in a distributed setting. Is there something about windows that's problematic for your scenario?


Not sure. I can't miss a single event and they must be processed in order. I'm guessing I need a way to check the window has all the messages and I'm not sure how to do that. The docs say messages may be dropped when buffers are full, or retried. That was a concern.


@yonatanel How do you know when you've achieved completeness in terms of having "seen" all the events?


@michaeldrogalis Exactly, I don't. When I read directly from Kafka I have at-least-once in-order guarantees.


@yonatanel Right, you have that with Onyx also. I'm talking semantically though. What piece of data indicates that you've seen "everything" you need to see?


@michaeldrogalis I would maybe have to implement my own sequencing per aggregate. But if Onyx has at-least-once and in-order I won't need that.
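That per-aggregate sequencing could look something like this. The :seq field is an assumption about how events would be stamped at the producer; it isn't an Onyx feature:

```clojure
(defn complete-through?
  "True when events contains every sequence number 0..(dec expected-n),
  regardless of arrival order or duplicates (at-least-once delivery)."
  [expected-n events]
  (= (set (range expected-n))
     (set (map :seq events))))

(defn in-order
  "Restores per-aggregate order once the set is complete."
  [events]
  (sort-by :seq (distinct events)))
```

A predicate of this shape is exactly what a flush decision for a window needs: don't emit the aggregate until the sequence is gap-free.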


@yonatanel I would recommend using a window with :onyx/group-by to force all "like" data to go to the same local process, even in a distributed setting, then use a predicate trigger to flush the window when it "has" all the data.


A predicate trigger is just a function that gets the window contents and returns true or false: whether it should flush the window, i.e. do whatever :trigger/sync says to do with it.
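Sketched as a window/trigger pair. Key names loosely follow the Onyx 0.9-era windowing docs, but the exact keys and the predicate's argument list vary by version, so treat this as a sketch rather than a reference; `:all-events-seen?` and `:write-aggregate!` are hypothetical functions you'd supply:

```clojure
;; The task's catalog entry would set :onyx/group-by-key (e.g. :agg-id)
;; so all of an aggregate's events land on the same peer.
(def windows
  [{:window/id :collect-events
    :window/task :process-events
    :window/type :global
    :window/aggregation :onyx.windowing.aggregation/conj}])

(def triggers
  [{:trigger/window-id :collect-events
    :trigger/refinement :onyx.refinements/accumulating
    :trigger/on :onyx.triggers/punctuation
    :trigger/pred ::all-events-seen?   ; decides when the window is complete
    :trigger/sync ::write-aggregate!}]) ; what to do with the flushed contents
```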


@michaeldrogalis Thanks! I'll experiment further until I have better questions.


@yonatanel Sure, happy to help. 🙂