#onyx
2017-01-06
isaac 04:01:31

How can I get the job data back from a job-id after it’s been submitted?

michaeldrogalis 04:01:42

@isaac I would recommend maintaining your own copy of the job data. It’s available in ZooKeeper, but it’s not meant to be stored there permanently.
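
A minimal sketch of that, assuming you record the job map in your own registry at submit time (`submitted-jobs` and `submit-and-record!` are illustrative names, not Onyx API):

```clojure
(require '[onyx.api :as onyx])

;; Hypothetical in-process registry of submitted jobs, keyed by job-id.
;; In production you'd likely persist this to a durable store instead.
(defonce submitted-jobs (atom {}))

(defn submit-and-record!
  "Submit a job and remember its job data, so it can be looked up later
  without relying on ZooKeeper retaining it."
  [peer-config job]
  (let [{:keys [job-id] :as result} (onyx/submit-job peer-config job)]
    (swap! submitted-jobs assoc job-id job)
    result))

;; Later: (get @submitted-jobs job-id) returns the original job map.
```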

len 12:01:36

Hi, I have a design question

len 13:01:00

We have the Datomic plugin connected and we pick up changes, which is great

len 13:01:52

However, there are many different things that we want to hang off the txns, e.g. send-email or generate-billing, etc.

len 13:01:22

Should we have one job with filters, or many jobs each connecting to Datomic?

len 13:01:50

any thoughts or ideas welcome

michaeldrogalis 16:01:22

The latter is preferable most of the time. If an unhandled user-level exception occurs in one job, it won’t disrupt the others from making progress.
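
Roughly, the many-jobs shape looks like the sketch below: one job per concern, each tailing the tx log with its own input task. The :datomic/* keys are illustrative; check the onyx-datomic plugin docs for the exact options in your version.

```clojure
(defn tx-log-job
  "Build one independent job for a single concern (e.g. email, billing).
  `process-fn` is a fully-qualified keyword naming your transform fn;
  `out-name`/`out-entry` describe that concern's output task."
  [db-uri process-fn out-name out-entry]
  {:workflow [[:read-tx-log :process] [:process out-name]]
   :catalog  [{:onyx/name :read-tx-log
               :onyx/plugin :onyx.plugin.datomic/read-log
               :onyx/type :input
               :onyx/medium :datomic
               :datomic/uri db-uri
               :onyx/max-peers 1
               :onyx/batch-size 20}
              {:onyx/name :process
               :onyx/fn process-fn
               :onyx/type :function
               :onyx/batch-size 20}
              out-entry]
   :task-scheduler :onyx.task-scheduler/balanced})

;; One submission per concern, so an exception in the email job
;; doesn't stall billing:
;; (onyx.api/submit-job peer-config
;;   (tx-log-job db-uri :my.app/email-tx :email-out email-out-entry))
;; (onyx.api/submit-job peer-config
;;   (tx-log-job db-uri :my.app/bill-tx :billing-out billing-out-entry))
```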

len 16:01:48

that of course makes sense

len 16:01:13

and what about having one job start others - one Datomic listener starting other jobs?

michaeldrogalis 16:01:42

@len Sorry, a bit confused. You’re asking about one job receiving some information, and turning around to start other jobs?

len 17:01:24

yes just wrapping my head around how to have many things happen in onyx

michaeldrogalis 17:01:25

@len Would recommend having N long-running jobs, one for each thing you’re doing.

michaeldrogalis 17:01:39

Rather than launching one-off jobs.

len 17:01:19

is having a lot of listeners to Datomic a concern?

len 17:01:04

probably not - just thinking aloud 🙂

len 17:01:53

and in general, is having a job start another job an anti-pattern?

michaeldrogalis 17:01:27

It’s not, but operationally I think you’ll have an easier time not doing that.

len 17:01:34

so in general, have workflow inputs and outputs connected to things external to the job?

michaeldrogalis 17:01:00

Kafka is a pretty good in-between since you can backlog all of your intermediate work.
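
A sketch of that wiring, with the listener job publishing transactions to a topic and each downstream job consuming at its own pace (the :kafka/* options here are illustrative and vary by onyx-kafka version):

```clojure
;; Output task for the single Datomic-listener job.
(def write-txs
  {:onyx/name :write-txs
   :onyx/plugin :onyx.plugin.kafka/write-messages
   :onyx/type :output
   :onyx/medium :kafka
   :kafka/topic "datomic-txs"
   :kafka/zookeeper "127.0.0.1:2181"
   :kafka/serializer-fn :my.app/serialize
   :onyx/batch-size 50})

;; Input task reused by each downstream job (email, billing, ...); each
;; job reads the topic independently, so the backlog buffers any of them
;; falling behind.
(def read-txs
  {:onyx/name :read-txs
   :onyx/plugin :onyx.plugin.kafka/read-messages
   :onyx/type :input
   :onyx/medium :kafka
   :kafka/topic "datomic-txs"
   :kafka/zookeeper "127.0.0.1:2181"
   :kafka/deserializer-fn :my.app/deserialize
   :kafka/offset-reset :smallest
   :onyx/batch-size 50})
```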

aengelberg 19:01:51

Does the :onyx/batch-size setting of a task signify the number of segments passed to that task, or the number of segments that task passes to downstream tasks? I’m curious what happens if you set varying batch sizes for different tasks in a job.

michaeldrogalis 19:01:31

@aengelberg It controls the read-factor. I believe we have an open issue for making a separate parameter to control the write-factor.

michaeldrogalis 19:01:36

It’s expected that multiple tasks within a job will have different :onyx/batch-size values, but sometimes you don’t need that level of tuning and using the same value throughout works alright.

michaeldrogalis 20:01:52

To be more specific, :onyx/batch-size controls the number of segments that will be read from that peer’s Aeron channel before it begins processing them. It will wait at most :onyx/batch-timeout milliseconds before giving up and processing them in the case where it doesn’t reach :onyx/batch-size.
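
As an illustration (values made up), a cheap input task can read large batches while a heavier downstream function uses smaller ones:

```clojure
(def catalog
  [{:onyx/name :in
    :onyx/plugin :onyx.plugin.core-async/input
    :onyx/type :input
    :onyx/medium :core.async
    :onyx/max-peers 1
    :onyx/batch-size 200     ;; read up to 200 segments per batch
    :onyx/batch-timeout 50}  ;; ...or whatever has arrived after 50 ms
   {:onyx/name :transform
    :onyx/fn :my.app/transform
    :onyx/type :function
    :onyx/batch-size 20      ;; heavier per-segment work, smaller batches
    :onyx/batch-timeout 50}
   {:onyx/name :out
    :onyx/plugin :onyx.plugin.core-async/output
    :onyx/type :output
    :onyx/medium :core.async
    :onyx/max-peers 1
    :onyx/batch-size 20
    :onyx/batch-timeout 50}])
```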

seako 20:01:39

i’m trying to find docs on the onyx http api (for listing what jobs are running and such). my search skills seem weak today. can anyone point me in the right direction?

seako 20:01:31

thank you very much!