This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # admin-announcements (4)
- # bangalore-clj (1)
- # beginners (28)
- # boot (16)
- # clara (4)
- # cljs-dev (28)
- # cljsrn (63)
- # clojure (136)
- # clojure-berlin (7)
- # clojure-gamedev (1)
- # clojure-nl (4)
- # clojure-russia (47)
- # clojure-sg (8)
- # clojure-spec (39)
- # clojure-uk (132)
- # clojurescript (66)
- # clojurex (5)
- # clojutre (2)
- # code-reviews (14)
- # core-logic (1)
- # cursive (13)
- # datavis (1)
- # datomic (35)
- # dirac (1)
- # editors (1)
- # hoplon (46)
- # jobs (1)
- # lambdaisland (5)
- # lein-figwheel (1)
- # mount (10)
- # off-topic (3)
- # om (67)
- # onyx (54)
- # planck (7)
- # proton (15)
- # protorepl (1)
- # re-frame (174)
- # ring (4)
- # ring-swagger (3)
- # specter (14)
- # untangled (15)
[ignore - found the docs now] a deployment-related question - what are the :onyx.bookkeeper/local-quorum-ports used for? presumably, since they are part of the :env-config, they must be the same for every peer - must every peer also be able to access every other peer's
@lucasbradstreet: We tried to do a rough estimate of our segment sizes; they come out to about 1.5K. Most of our values are strings, and we just assumed a string-like size for keywords because I have no idea how many bytes a keyword takes, or what the map overhead is
@camechis You may want to measure the disk IOPS on your BookKeeper machines as a sanity check.
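Rather than hand-estimating keyword and map overheads, one option is to serialize a representative segment and count the bytes. This is only a sketch, assuming taoensso.nippy (a serializer commonly used in the Clojure ecosystem) is on the classpath; the sample segment keys are made up:

```clojure
(require '[taoensso.nippy :as nippy])

;; Serialize a representative segment and measure its byte size directly,
;; instead of guessing per-keyword / per-map overheads.
(defn segment-bytes [segment]
  (alength (nippy/freeze segment)))

;; Hypothetical segment shape, for illustration only:
(segment-bytes {:user/id "abc-123" :user/name "Jane" :event/type :click})
```

Averaging this over a sample of real segments gives a better sizing number than a per-field estimate.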
In onyx-dashboard I see only long UUIDs. It would be good to have the possibility to assign a normal name to submitted jobs
we discussed allowing job ids to be any string, but I believe it’s a problem, especially if you have multiple users. What we decided instead is that you can place your job name in the job metadata. It would be quite easy to display this name in the dashboard if we decide on a consistent key in the metadata map
What about allowing it to be specified in submit-job, i.e. allowing submit-job(peer-config, job, job-name)?
We added a :job-metadata key to allow arbitrary user-level data to be attached to a job, since :job-name is only one special case of that.
Yes, but job-name could be a more first-class property from the api/configuration perspective - many ingredients have names: workflow items, catalog items, windows.
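A sketch of the metadata approach discussed above; the :job-name key inside the metadata map is just a convention here (no agreed-upon key is fixed in this conversation), and the catalog is elided:

```clojure
;; Attach a human-readable name to a job via its metadata map when
;; submitting. :job-name is a user-chosen convention, not an Onyx key.
(def job
  {:workflow [[:in :process] [:process :out]]
   :catalog [] ;; task catalog elided for brevity
   :task-scheduler :onyx.task-scheduler/balanced
   :metadata {:job-name "nightly-user-rollup"}})

(onyx.api/submit-job peer-config job)
```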
I made that change last night. You probably don't want to track the master branch of Onyx right now.
Hi there, I’m looking at Aggregation and State Management: http://www.onyxplatform.org/docs/user-guide/latest/aggregation-state-management.html I’m trying to understand the difference between the create-state-update and apply-state-update functions. Does create-state-update only get run once and apply-state-update get run on every subsequent segment for the window?
Hi. create-state-update basically turns a segment into an update to the aggregation state that is serializable. So you create the state update, and then it is applied to the window and simultaneously written to a state log
Thanks @lucasbradstreet. Ok, from what you’ve explained, it looks like both fns run on each segment. One preps it to be serialized; the other writes the serializable data structure to the state log. I’m thinking I’d put the majority of my business logic in the create-state-update fn, where I’d maintain a small amount of state in a map, then let the apply-state-update “commit” that map to the state log.
That’s right, but you would preferably minimise the amount of data produced in the create-state-update fn, and update the map in the apply-state-update fn. Think of it as building a diff in the create-state-update fn
I think that makes sense. I’m going to experiment a little on a local test so I understand the relationship a little better.
Think of the aggregation as a state machine, which has updates created by create-state-update applied to it, resulting in a new state (which would be your map). Good luck!
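Following the user-guide page linked above, the two fns fit together roughly like this; a minimal sketch of a conj-style aggregation, where the fn names are illustrative and the signatures follow the docs:

```clojure
(defn collect-init [window] [])

;; create-state-update: turn an incoming segment into a small,
;; serializable diff (here, a tagged op vector).
(defn collect-create-update [window state segment]
  [:conj segment])

;; apply-state-update: apply that diff to the current window state,
;; producing the new state. The same diff is what gets journaled
;; to the state log.
(defn collect-apply-update [window state [op segment]]
  (case op
    :conj (conj state segment)))

(def collect-aggregation
  {:aggregation/init collect-init
   :aggregation/create-state-update collect-create-update
   :aggregation/apply-state-update collect-apply-update})
```

The diff-style update is what keeps the journal small, per the advice above: only the tagged op is written, not the whole accumulated map.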
@lucasbradstreet: I think we are definitely on to something here. You mentioned this stuff the other day but it didn’t click for me. I think this will drastically reduce what goes into the journal
bookkeeper - if i'm not using aggregations atm, do i need to care about bookkeeper? can i just set :onyx.bookkeeper/server? like the onyx-template does and forget about it until i want to use aggregations?
if i decide to run a bookkeeper server, how do i tell the peer where to contact the bookkeeper server? i can't see anything about bookkeeper hosts or ports in the peer config
and would i need to give each peer cluster (i.e. distinct onyx-id) a separate bookkeeper base-journal-dir, or will the log resources used include an onyx-id path or similar?
@mccraigmccraig Yes, you can turn it off entirely if you're not using windowing.
question... is there anything preventing a flow within a flow? Currently, a value of X=A gets processed differently than a value of X=B, but it turns out that X=B has many sub-levels and I need to process, e.g., Y=i differently than Y=ii, etc. Is it better to have nested flow conditions, i.e. if B flow then if Y=i flow etc., or is it better to have all possible flow conditions at the top level?
it might be the case that I don't know about B's payload until B has been processed, so a flow within a flow would be ideal
for example, I have a workflow and I'm about to add new steps, with new flow-conditions...
something like a workflow of [[:in :A] [:A :B] [:B :D] [:A :C] [:D :outD] [:C :outC]] where :A has flow conditions attached. But I now want to further process :D instead of going directly to :outD, and will need new flow-conditions for the various differences in the
it's more like A is an initial step, but once A is ready, we can tell whether it goes to B or to C. Once it goes to B, it can further be processed to D. But once D is known, there will be more flows to come, e.g. [:D :E] [:D :F] [:D :G], to be added to the workflow above. It sounds confusing, but this has helped me and I think I know how it will work
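Keeping everything at the top level, the extended routing described above might look like the sketch below; the predicate names are hypothetical, and nesting isn't needed because :D only ever sees segments that already took the :A -> :B -> :D path:

```clojure
;; Flat flow-conditions covering both the initial :A split (X=A vs X=B)
;; and the later :D split (Y=i vs Y=ii). Each entry routes segments
;; leaving one task; the per-task grouping keeps the "nested" logic flat.
(def flow-conditions
  [{:flow/from :A, :flow/to [:B], :flow/predicate ::x-is-b?}
   {:flow/from :A, :flow/to [:C], :flow/predicate ::x-is-a?}
   {:flow/from :D, :flow/to [:E], :flow/predicate ::y-is-i?}
   {:flow/from :D, :flow/to [:F], :flow/predicate ::y-is-ii?}])
```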