#onyx
2016-08-22
zamaterian11:08:53

Hi guys, congrats on the great news 🙂

lucasbradstreet11:08:15

Thanks 🙂. We’re really happy to be building out Onyx further

mccraigmccraig13:08:14

[ignore - found the docs now] a deployment-related question - what are the :onyx.bookkeeper/local-quorum-ports used for? presumably, since they are part of the :env-config, they must be the same for every peer - must every peer also be able to access every other peer's :onyx.bookkeeper/local-quorum-ports?
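
For context, a sketch of the relevant env-config keys, modelled on the onyx-template's development defaults (the id, address, and ports below are placeholders, not requirements). As I understand it, when :onyx.bookkeeper/local-quorum? is set, the local-quorum-ports are the ports the embedded three-instance BookKeeper quorum binds to on that machine:

(def env-config
  {:onyx/id "dev-cluster"
   :zookeeper/address "127.0.0.1:2188"
   :onyx.bookkeeper/server? true
   :onyx.bookkeeper/local-quorum? true
   :onyx.bookkeeper/local-quorum-ports [3196 3197 3198]})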

Travis14:08:39

@lucasbradstreet: We tried to do a rough estimate of our segment sizes; they come out to roughly 1.5K. Most of our values are strings, and we just assumed a string size for keywords because I have no idea how many bytes a keyword takes, or what the map overhead is.

Travis14:08:49

scratch that, 2.1K roughly
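
One way to sanity-check such estimates, instead of summing per-field guesses: serialize a representative segment with Nippy (the serializer Onyx uses for segments by default) and count the bytes. The segment below is a made-up example:

(require '[taoensso.nippy :as nippy])

(def sample-segment
  {:event/id "2f6c0a77"
   :event/type :page-view
   :event/url "https://example.com/some/long/path"})

;; serialized size in bytes for this one segment
(alength (nippy/freeze sample-segment))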

michaeldrogalis14:08:31

@camechis You may want to measure the disk IOPS on your BookKeeper machines as a sanity check.

lucasbradstreet15:08:44

And how many segments were you processing in total?

Travis15:08:16

@lucasbradstreet: we had approximately 1.7 million loaded up in Kafka

Travis15:08:55

Tried many different trigger types but mainly using a timer with discarding

mariusz_jachimowicz16:08:40

In onyx-dashboard I see only long UUIDs. It would be good to have the ability to assign a human-readable name to submitted jobs.

lucasbradstreet16:08:42

we discussed allowing job ids to be any string, but I believe it’s a problem, especially if you have multiple users. What we decided instead is that you can place your job name in the job metadata. It would be quite easy to display this name in the dashboard if we decide on a consistent key in the metadata map.
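
A sketch of what that could look like at submit time, assuming the :job-metadata key mentioned just below lands on the job map (the :job-name key inside it is only a convention, peer-config is assumed to be defined elsewhere, and the catalog is a minimal core.async stub with its lifecycles omitted):

(require '[onyx.api])

(def job
  {:workflow [[:in :identity-task] [:identity-task :out]]
   :catalog [{:onyx/name :in
              :onyx/plugin :onyx.plugin.core-async/input
              :onyx/type :input
              :onyx/medium :core.async
              :onyx/batch-size 20}
             {:onyx/name :identity-task
              :onyx/fn :clojure.core/identity
              :onyx/type :function
              :onyx/batch-size 20}
             {:onyx/name :out
              :onyx/plugin :onyx.plugin.core-async/output
              :onyx/type :output
              :onyx/medium :core.async
              :onyx/batch-size 20}]
   :task-scheduler :onyx.task-scheduler/balanced
   :job-metadata {:job-name "user-sessionization"}}) ;; hypothetical name

(onyx.api/submit-job peer-config job)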

mariusz_jachimowicz17:08:35

What about allowing a job name to be specified via submit-job, i.e. allowing (submit-job peer-config job job-name)?

michaeldrogalis17:08:09

We added a :job-metadata key to allow arbitrary user-level data to be attached to a job, since :job-name is only one special case of that.

mariusz_jachimowicz18:08:24

Yes, but job-name could be a more first-class property from the API/configuration perspective - many ingredients have names: workflow items, catalog items, windows.

michaeldrogalis18:08:09

I made that change last night. You probably don't want to track the master branch of Onyx right now.

michaeldrogalis18:08:21

You're likely on a SNAPSHOT of the plugin or Onyx core.

aaelony18:08:30

okay, makes sense. thank you

aaelony18:08:46

Ok, can confirm that reverting to 0.9.9 works. I was on 0.9.10-SNAPSHOT. thanks

jholmberg18:08:25

Hi there, I’m looking at Aggregation and State Management (http://www.onyxplatform.org/docs/user-guide/latest/aggregation-state-management.html). I’m trying to understand the difference between the create-state-update and apply-state-update functions. Does create-state-update only get run once, while apply-state-update gets run on every subsequent segment for the window?

lucasbradstreet18:08:47

Hi. create-state-update basically turns a segment into a serializable update to the aggregation state. So you create the state update, and then it is applied to the window and simultaneously written to a state log.

Travis19:08:05

@lucasbradstreet: @jholmberg is working with me on the windowing issue. We think we had an epiphany about using these two functions to limit what we are storing. Just trying to understand how they work.

jholmberg19:08:54

Thanks @lucasbradstreet. Ok, from what you’ve explained, it looks like both fns run on each segment. One preps it to be serialized, the other writes the serializable data structure to the state log. I’m thinking I’d put the majority of my business logic in the create-state-update fn, where I’d maintain a small amount of state in a map, then let the apply-state-update fn “commit” that map to the state log.

lucasbradstreet19:08:28

That’s right, but you would preferably minimise the amount of data in the create-state-update fn, and update the map in the apply-state-update fn. Think of it as building a diff in the create-state-update fn.

lucasbradstreet19:08:42

This will minimise the amount of data written to BookKeeper in each update

jholmberg19:08:13

I think that makes sense. I’m going to experiment a little on a local test so I understand the relationship a little better.

lucasbradstreet19:08:41

Think of the aggregation as a state machine, which has updates, created by create-state-update, applied to it, resulting in a new state (which would be your map). Good luck!
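
Pulling that together, a minimal sketch of a custom aggregation following the diff pattern described above (all names here, like count-by-status and :status, are made up for illustration):

(ns my.app.aggregations)

;; Initial window state: an empty map of status -> count.
(defn init [window] {})

;; Runs per segment; the return value is the entry journaled to the
;; state log, so keep it small - just enough to replay the change.
(defn create-state-update [window state segment]
  [:inc (:status segment)])

;; Folds a journaled entry into the in-memory window state.
(defn apply-state-update [window state [op k]]
  (case op
    :inc (update state k (fnil inc 0))))

(def count-by-status
  {:aggregation/init init
   :aggregation/create-state-update create-state-update
   :aggregation/apply-state-update apply-state-update})

A window would then reference it via :window/aggregation :my.app.aggregations/count-by-status.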

Travis20:08:35

@lucasbradstreet: I think we are definitely on to something here; you mentioned this stuff the other day but it didn’t click for me. I think this will drastically reduce what goes into the journal.

mccraigmccraig22:08:53

bookkeeper - if i'm not using aggregations atm, do i need to care about bookkeeper? can i just set :onyx.bookkeeper/server? and :onyx.bookkeeper/local-quorum? like the onyx-template does and forget about it until i want to use aggregations?

Travis22:08:25

yeah, i don’t think you need bookkeeper until you do any kind of windowing

Travis22:08:53

so the embedded one should be fine

mccraigmccraig22:08:34

if i decide to run a bookkeeper server, how do i tell the peer where to contact the bookkeeper server? i can't see anything about bookkeeper hosts or ports in the peer config

mccraigmccraig22:08:48

and would i need to give each peer cluster (i.e. distinct onyx-id) a separate bookkeeper base-ledger-dir and base-journal-dir, or will the log resources used include an onyx-id path or similar?

Travis22:08:26

you basically tell it not to run the embedded

Travis22:08:51

:onyx.bookkeeper/server? false

Travis22:08:10

it will then go to zookeeper and look under the ledgers node

Travis22:08:16

to find out where all the bookies are
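
Putting that together, a sketch of an env-config pointing peers at an external BookKeeper cluster (the id and ZooKeeper address are placeholders). With the embedded server disabled, discovery goes through ZooKeeper's ledgers node, so no explicit BookKeeper host/port should be needed:

(def env-config
  {:onyx/id "my-cluster-id"
   :zookeeper/address "zk1.internal:2181"
   :onyx.bookkeeper/server? false})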

michaeldrogalis22:08:22

@mccraigmccraig Yes, you can turn it off entirely if you're not using windowing.

Travis22:08:58

Even better then, if you're not using it

aaelony22:08:04

question... is there anything preventing a flow within a flow for flow-conditions? Currently, a value of X=A gets processed differently than a value of X=B, but it turns out that X=B has many sub-levels, and I need to process, e.g., X=B with Y=i differently than X=B with Y=ii, etc. Is it better to have nested flow conditions (i.e. if B flows, then if Y=i flows, etc.), or is it better to have all possible flow conditions at the top level?

aaelony22:08:10

it might be the case that I don't know about B's payload until B has been processed, so a flow within a flow would be ideal

michaeldrogalis22:08:48

@aaelony Do you mean arbitrarily nesting the syntax for :flow/pred?

michaeldrogalis22:08:02

e.g. [:and [:or [:and ...]]]?

aaelony22:08:19

I mean, extra catalog processing steps with more flow-conditions...

aaelony22:08:34

maybe that's the answer ... 😉

aaelony22:08:44

for example, I have a workflow and I'm about to add new steps, with new flow-conditions...

aaelony22:08:50

so maybe it's not that special

aaelony22:08:08

something like a workflow of [[:in :A] [:A :B] [:B :D] [:A :C] [:D :outD] [:C :outC]] where :A has flow conditions attached. But I now want to further process :D instead of going directly to :outD, and will need new flow-conditions for the various differences in the :D step
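
A sketch of that scenario with flat flow-conditions entries, one set routing out of :A and another out of :D, since conditions attach per task rather than nesting (all predicate names are hypothetical, and the extra :D edges stand in for the further :D processing described):

(ns my.app.flows)

;; Flow predicates take [event old-segment new-segment all-new-segments].
(defn x-b? [event old-seg new-seg all-new] (= :b (:x new-seg)))
(defn x-a? [event old-seg new-seg all-new] (= :a (:x new-seg)))
(defn y-i? [event old-seg new-seg all-new] (= :i (:y new-seg)))

(def workflow
  [[:in :A] [:A :B] [:A :C] [:B :D]
   [:D :E] [:D :F] [:C :outC]])

(def flow-conditions
  [;; routing out of :A, decided once :A's output is known
   {:flow/from :A :flow/to [:B] :flow/predicate ::x-b?}
   {:flow/from :A :flow/to [:C] :flow/predicate ::x-a?}
   ;; routing out of :D - the "flow within a flow", expressed flat;
   ;; predicates compose with [:and ...] / [:not ...] as noted above
   {:flow/from :D :flow/to [:E] :flow/predicate [:and ::x-b? ::y-i?]}
   {:flow/from :D :flow/to [:F] :flow/predicate [:not ::y-i?]}])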

michaeldrogalis22:08:27

It sounds like you want to share some data between what happens at A and D?

michaeldrogalis22:08:43

You'd have to pass that data alongside the segment through A -> B -> D

michaeldrogalis22:08:38

Gotta run for a bit.

aaelony22:08:39

it's more like A is an initial step, but once A is ready, we can tell whether it goes to B or to C. Once it goes to B, it can further be processed to D. But once D is known, there will be more flows to come, e.g. [:D :E] [:D :F] [:D :G] to be added to the workflow above. It sounds confusing, but this has helped me and I think I know how it will work

aaelony23:08:05

np, cheers