This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-10-03
Channels
- # aws (1)
- # bangalore-clj (3)
- # beginners (3)
- # boot (9)
- # business (1)
- # cljs-dev (72)
- # cljsjs (7)
- # clojure (86)
- # clojure-austin (1)
- # clojure-belgium (4)
- # clojure-brasil (14)
- # clojure-conj (3)
- # clojure-dev (10)
- # clojure-italy (4)
- # clojure-poland (14)
- # clojure-russia (36)
- # clojure-spec (144)
- # clojure-uk (50)
- # clojurebridge (1)
- # clojurescript (160)
- # clr (2)
- # core-async (8)
- # cursive (56)
- # datomic (34)
- # devcards (3)
- # emacs (2)
- # ethereum (1)
- # events (3)
- # hoplon (21)
- # jobs (2)
- # leiningen (9)
- # luminus (3)
- # off-topic (1)
- # om (26)
- # onyx (42)
- # pedestal (29)
- # protorepl (1)
- # re-frame (43)
- # reagent (26)
- # rethinkdb (4)
- # ring-swagger (4)
- # spacemacs (5)
- # specter (4)
- # untangled (102)
- # vim (43)
- # yada (10)
@lucasbradstreet just confirming that for :onyx.messaging/bind-addr localhost is fine in single-node mode?
@robert-stuttaford: yes, that's fine
thanks!
@lucasbradstreet -- would love to know your thoughts on the terraform stuff i shared on saturday, particularly around highstorm ansible stuff
I'm interested, but haven't had a look yet
cool 🙂 of particular note is the use of systemd to handle aeron + peers + jobs processes
If i wanted onyx to operate over an unbounded stream of data and output the largest number/max it had seen in the stream, what would the recovery process look like if the process crashed? Would it need to replay all the data? Is there a mechanism to persist the current max somewhere? Example input and output: 1, 2, 1, 3, 2 -> onyx -> 1, 2, 2, 3, 3. Naively i could see sandwiching onyx between two kafka streams and re-reading the last written max in case of a failure, but i’m assuming there is a native mechanism for this.
@drewverlee Onyx recovers from the last successfully acknowledged message and plays the stream forward from that point. It's able to determine what the max, or whatever aggregate you're using, is by replaying another log that it uses specifically for incremental state updates. See http://www.onyxplatform.org/docs/user-guide/0.9.11/#aggregation-state-management for an explanation.
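The mechanism described above can be sketched in a few lines. This is a minimal, illustrative sketch assuming the three-function shape (init / create-state-update / apply-state-update) that Onyx's built-in aggregates in `onyx.windowing.aggregation` follow; the function names here are hypothetical, not the actual library definitions:

```clojure
;; Sketch of an incremental max aggregation (illustrative names, modeled
;; on the shape of Onyx's built-in aggregates). Onyx logs the small state
;; *transition* produced per segment, not the raw segment stream, so
;; recovery replays these changelog entries rather than all the data.

(defn max-init [window]
  ;; initial window state: no max seen yet
  nil)

(defn max-create-state-update [window state segment]
  ;; the changelog entry written for this segment: the new max, if any
  (let [v (:value segment)]
    (if (or (nil? state) (> v state)) v state)))

(defn max-apply-state-update [window state entry]
  ;; applying a changelog entry just adopts the recorded value
  entry)

;; After a crash, replaying the changelog reconstructs the state.
;; For the example stream crashing after 1, 2, the log holds [1 2]:
(reduce (partial max-apply-state-update :w) (max-init :w) [1 2])
;; => 2
```

The point of the create/apply split is that the logged entries stay tiny and deterministic to re-apply, so recovery cost is proportional to the changelog, not the input stream.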
I didn't really get the point of windows until I read https://www.oreilly.com/ideas/why-local-state-is-a-fundamental-primitive-in-stream-processing. Now my mind is pretty blown.
@michaeldrogalis, Thanks, I read over the docs but i’m still a bit unsure of the details. Can you give me an example of a “changelog update”? I have never been sure if this changelog contains the messages that were processed, or meta information about the job itself. For instance, if i’m reading in values: 1, 2, 1*crashes, 3, 2 -> onyx peer -> 1, 2, 2, 3, 3, and the onyx peer crashes after reading the second 1, then does the changelog contain [1, 2, 2]? Or maybe just [2] (the current max)?
@dominicm that looks like a great source, i’ll read it over right now.
@dominicm We had a very fun time implementing them.
@michaeldrogalis I remember reading your blog post on them. I was impressed, but unsure how I could use them for my purposes. Then I read the oreilly thing.
@drewverlee It contains details to apply a function to advance the state from one entry to the next. This file contains the state transitions for the built-in aggregates: https://github.com/onyx-platform/onyx/blob/0.9.x/src/onyx/windowing/aggregation.cljc
@drewverlee It might help to think of the aggregations as a reduction over the sequence of segments flowing into the window, where the “accumulator” piece of the reduction is checkpointed to durable storage every time an extent is triggered.
@drewverlee It contains [1 2], per your example.
Correct. We're going to support another kind of state recording in the future too, but incremental snapshots are the existing mechanism for recording state.
hey @michaeldrogalis 🙂 hope you and the gang are doing well. how’s startup life?
hah, yes, paperwork. ain’t that fun
Joking aside, pretty awesome. I'm itching to show our new product on top of Onyx. It's a game changer.
How about you? How's stuff?
itching to see it 🙂
doing super awesome, thanks. i’ve been getting my mend on — i’m sure you saw me mention the terraform stuff i shared over the weekend
i’ve been rebuilding our infrastructure layer from scratch. redoing all the builds, run scripts, environment vars, etc etc. truly cathartic
It feels like giving the car a wash when you do that. I agree, cathartic is the word 🙂
PR for info model => README stuff is up: https://github.com/colinhicks/onyx-gen-doc/pull/1
@colinhicks This is incredibly good work, wow.
One sec, phone
what does this do?
Thanks! It was fun. The commit history is pretty messed up thanks to a botched rebase, but I managed to get github reviewability
-curious-
Allows the plugin readmes to be templatized against their information models ... from this: https://raw.githubusercontent.com/colinhicks/onyx-kafka/671d29eaf5a321a9882188cd3c9d94428d623912/README.template.md ... to this https://github.com/colinhicks/onyx-kafka/blob/671d29eaf5a321a9882188cd3c9d94428d623912/README.debug.md
If anyone wants to review the PR, lmk and I'll add you as a collaborator
that’s fantastic!!
@robert-stuttaford, you can also compare onyx-gen-doc's own README to its template: https://raw.githubusercontent.com/colinhicks/onyx-gen-doc/master/README.template.md for meta-documentation goodness
well done colin. i’m sure this is going to make life a lot easier for the onyx team to manage an ever growing list of projects
thanks - hopefully, indeed!
Looks awesome. I'll do a bigger review tonight and coordinate getting this into the release process for each repo 🙂
Huuuuge thank you. ^^
sounds good. you're welcome 🙂
> I'm itching to show our new product on top of Onyx. It's a game changer. Release soon so i can include it in my onyx vs spark vs flink comparison 🙂
@drewverlee It's looking like December will be the time that we get our first few customers on board and put out a public technical preview. All I can say now is that we have made some truly novel advancements in distributed processing. I'm expecting the community to grow substantially after going public with it.