This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # aws (1)
- # aws-lambda (1)
- # beginners (27)
- # boot (16)
- # cider (1)
- # clara (54)
- # cljs-dev (4)
- # cljsjs (8)
- # cljsrn (25)
- # clojure (148)
- # clojure-dev (2)
- # clojure-finland (1)
- # clojure-france (18)
- # clojure-italy (10)
- # clojure-nl (3)
- # clojure-russia (27)
- # clojure-sg (2)
- # clojure-uk (17)
- # clojurebridge (6)
- # clojurescript (70)
- # core-async (1)
- # css (6)
- # cursive (35)
- # data-science (3)
- # datomic (22)
- # events (4)
- # jobs (18)
- # jobs-discuss (14)
- # leiningen (4)
- # lumo (22)
- # off-topic (20)
- # om (5)
- # om-next (1)
- # onyx (47)
- # pedestal (107)
- # re-frame (43)
- # reagent (1)
- # ring (2)
- # ring-swagger (2)
- # rum (18)
- # sql (15)
- # unrepl (4)
- # vim (61)
- # yada (3)
Thanks! All of your collective encouragement and assistance has been pivotal in getting this to happen - not to mention the elegant work y’all have done! 🙂
Go team Clojure 😄
when decoupling your virtual peers from job submission, what are the best practices for where to put my job’s code?
as in, i know that in onyx jobs are technically data, but you do need to provide an
:fn that’s on the classpath of the peers
so the peers cannot be designed in a job-agnostic way (or otherwise, you would need to upload a .jar and load that on the fly)
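To make the “jobs are data” point concrete, here is a minimal sketch of an Onyx catalog entry. The task and function names (`:enrich`, `my.jobs/enrich`) are hypothetical examples, not from the conversation: the job itself is plain data, but `:onyx/fn` names a fully-qualified var that must resolve on the classpath of every peer that ends up running the task — which is exactly why peers can’t be fully job-agnostic.

```clojure
;; Hypothetical catalog-entry sketch. The job submission is just data,
;; but :onyx/fn is a keyword naming a fully-qualified function
;; (my.jobs/enrich — an assumed example name) that the peers must be
;; able to resolve on their classpath at runtime.
{:onyx/name :enrich
 :onyx/fn :my.jobs/enrich
 :onyx/type :function
 :onyx/batch-size 20}
```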
we just build an uberjar with all the jobs' code in it and run peers from that
but that does require all the peers to be restarted when you change your jobs, right?
yes, all peers get restarted when i change job code... but that's not much different from all my api instances getting restarted when i change code there
i guess if i was running jobs from multiple projects on one onyx cluster it would be different
as in, make one uberjar that contains both the job code and the ability to run the peers
is that what you’re doing as well, @mccraigmccraig?
yes @lmergen - i have one uberjar with different entrypoints to run peers and manage a job
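A minimal sketch of that single-uberjar layout, under stated assumptions: all names here (`entry.core`, `start-peers!`, `submit-job!`) are made up for illustration, and the function bodies are stubs standing in for the real Onyx peer-startup and job-submission calls. One `-main` dispatches on its first CLI argument, so one artifact serves both roles.

```clojure
(ns entry.core
  (:gen-class))

;; Stubs standing in for the real Onyx calls (peer startup and
;; onyx.api/submit-job), so this sketch is self-contained.
(defn start-peers! [] (println "starting peers...") :peers)
(defn submit-job!  [] (println "submitting job...") :submitted)

(defn -main
  "Dispatch on the first CLI arg so one uberjar has two entrypoints:
   java -jar app.jar peers   ;; run virtual peers
   java -jar app.jar submit  ;; submit the job and exit"
  [mode & _args]
  (case mode
    "peers"  (start-peers!)
    "submit" (submit-job!)
    (println "usage: peers | submit")))
```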
i guess i’ve been tainted by too much hadoop to think that this decoupling was necessary
perhaps onyx will do job-jar distribution too at some point in the future... you'll have to ask mike or lucas about that tho
@lmergen If your jobs aren’t being built up dynamically, what is the advantage you see in putting the job data and the code in different jars?
Sounds like you changed your mind already, but was just curious.
i think i was expecting the ability to submit new jobs (as in, newly written) in an already-running cluster
Okiedoke. I think you’re on the right track now.
i now realise that this is an anti-pattern, since it makes reasoning about the peer processors much more difficult
You can call onyx.api/submit-job when the cluster is running as many times as needed. Is that what you meant?
Oh - yep. Jobs, once submitted, are immutable. You are correct.
There is an Apache project whose flagship feature is the ability to edit similar streaming structures on the fly.
That seems rather insane to reason about.
The name escapes me at the moment. 😕 But anyway, now you’ve got the idea. 🙂
that’s the project of the people from https://data-artisans.com
which is all about “your data doesn’t stop flowing”, and “real time data applications” 🙂
@lmergen Onyx has an adaptation of their streaming engine and is suitable for the same type of work, Dataflow-style unified stream/batch processing.
yeah i’m aware of that, i was referring to @michaeldrogalis’s comment about some apache project that allows you to edit streaming structures on the fly
@lmergen I think it was Gear Pump, though I haven’t checked. Flink’s submissions are also immutable, same as Onyx’s
It’s true though, Apache has a zillion of these.