This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-02-07
Channels
- # aleph (3)
- # aws (7)
- # beginners (117)
- # boot (119)
- # cider (2)
- # cljs-dev (3)
- # clojure (193)
- # clojure-austin (1)
- # clojure-dusseldorf (4)
- # clojure-finland (5)
- # clojure-france (5)
- # clojure-italy (7)
- # clojure-portugal (1)
- # clojure-russia (204)
- # clojure-serbia (5)
- # clojure-spec (31)
- # clojure-uk (64)
- # clojurescript (288)
- # community-development (9)
- # core-async (54)
- # cursive (8)
- # datascript (18)
- # datomic (26)
- # dirac (8)
- # emacs (26)
- # figwheel (1)
- # hoplon (16)
- # jobs (2)
- # jobs-discuss (4)
- # juxt (1)
- # lein-figwheel (4)
- # leiningen (14)
- # london-clojurians (2)
- # lumo (17)
- # off-topic (44)
- # om (63)
- # om-next (2)
- # onyx (26)
- # perun (14)
- # planck (5)
- # portland-or (34)
- # proton (2)
- # protorepl (8)
- # quil (1)
- # re-frame (6)
- # reagent (16)
- # remote-jobs (4)
- # ring (7)
- # ring-swagger (10)
- # rum (1)
- # untangled (2)
Is the take-segments! being called at all?
If the take-segments! is being called, then your problem is that this job will never end, because there is no end-tx and it's a log / streaming input, so you will never get to a point where a :done is written to the output channel, and thus take-segments! will never yield
try taking from the channel with a regular core.async take function (`<!!`, etc)
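The suggestion above can be sketched with plain core.async: instead of blocking in take-segments! waiting for a :done that never arrives, take segments one at a time with `<!!` and decide when to stop yourself. This is a minimal sketch with a hypothetical output channel standing in for the job's output task.

```clojure
(require '[clojure.core.async :refer [chan >!! <!!]])

;; Hypothetical channel standing in for the job's output task.
(def out-ch (chan 10))
(>!! out-ch {:id 1})
(>!! out-ch {:id 2})
(>!! out-ch :done)

;; Take segments one at a time; stop when (and if) :done arrives.
(def results
  (loop [segments []]
    (let [segment (<!! out-ch)]
      (if (= :done segment)
        segments
        (recur (conj segments segment))))))
```

For a true streaming input that never emits :done, you would instead stop on some other condition (a count, a timeout via `alts!!`, etc.).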
so im fairly sure this is in the docs (but im just missing it right now): if i have two jobs that have 10 tasks each (assume no minimum peers are set) and I only have 10 peers, will the tasks be assigned to peers as soon as they are submitted (by calling submit-job), or will they only be assigned when the job actually runs (i.e. it receives input from its input tasks)?
@rc1140 The job starts immediately on submit-job if there are enough peers to satisfy the job, it won’t wait for input before starting the tasks.
As for your specific scenario, it depends on which job scheduler you choose. http://www.onyxplatform.org/docs/user-guide/latest/#scheduling
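For reference, the job scheduler is chosen in the peer configuration map. This is a sketch only; the exact config keys vary by Onyx version (the tenancy id value here is made up), so check the linked scheduling docs for the keywords your version accepts.

```clojure
;; Fragment of a peer configuration map (illustrative values).
{:onyx/tenancy-id "my-tenancy"                          ; hypothetical id
 ;; Scheduler choices described in the Onyx scheduling docs, e.g.:
 ;;   :onyx.job-scheduler/greedy     - one job gets all peers
 ;;   :onyx.job-scheduler/balanced   - peers split evenly across jobs
 ;;   :onyx.job-scheduler/percentage - jobs declare a peer percentage
 :onyx.peer/job-scheduler :onyx.job-scheduler/balanced}
```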
Two jobs with 10 tasks each = 20 tasks. You have 10 peers, so only one job can start.
but the jobs read their input from a kafka chan which is what actually triggers the start of the jobs workflow
thanks michaeldrogalis and lucasbradstreet! the finally was never being called, so it was blocking in take-segments!, but you're right that since the job never completes with a :done (because i didn't set a :datomic/log-end-tx, i presume), it never returned. so i added a loop with <!! as suggested and it seems to be working. thanks again!
@michaeldrogalis the onyx docs mention that you should try to keep the number of vpeers close to the number of cores on the machine. have any adverse effects been observed when you don't do that?
@jeremy Hooray! 🙂 Let us know if you get stuck again.
@rc1140 Yeah you’re going to get serious thread contention.
Onyx will end up context switching so frequently that almost no progress will get made.
is there a better way to approach this, i.e. the starting and stopping of the scheduled jobs?
@rc1140 If it’s a job with only batch inputs, the job will stop by itself when it’s processed all the data. If it’s a streaming job, you must bring it down with onyx.api/kill-job, otherwise it presumes there’s more input coming.
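Stopping a streaming job looks roughly like the sketch below. It assumes the usual shape of the Onyx API where submit-job returns a map containing the job's id; peer-config and job are placeholder names for the configuration and job maps you already have.

```clojure
(require '[onyx.api])

;; peer-config: the same configuration map used to start the peers.
;; job: the job map (workflow, catalog, lifecycles, ...).
(let [{:keys [job-id]} (onyx.api/submit-job peer-config job)]
  ;; ... later, when the streaming job should be brought down:
  (onyx.api/kill-job peer-config job-id))
```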
ok so effectively all i need to do is append a :done to my input data and it will kill itself after processing the job
@rc1140 Correct, if you’re using core.async or kafka. Things like SQL, S3, and some versions of the Datomic input plugin will terminate on their own.
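Appending :done for a core.async input can be sketched like this: the sentinel goes onto the input channel after the real segments, signalling that no more input is coming so the job can complete on its own. The channel here is a stand-in for whatever channel your input task reads from.

```clojure
(require '[clojure.core.async :refer [chan >!! close!]])

;; Hypothetical channel feeding a core.async input task.
(def in-ch (chan 10))
(>!! in-ch {:n 1})
(>!! in-ch {:n 2})
;; The :done sentinel marks the input as complete, letting the job finish.
(>!! in-ch :done)
(close! in-ch)
```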