This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-01-21
Channels
- # aatree (88)
- # admin-announcements (14)
- # alda (26)
- # announcements (4)
- # avi (6)
- # aws (7)
- # beginners (80)
- # boot (268)
- # braid-chat (58)
- # cider (4)
- # clara (54)
- # cljs-dev (16)
- # cljsrn (27)
- # clojars (13)
- # clojure (123)
- # clojure-chicago (2)
- # clojure-czech (8)
- # clojure-france (5)
- # clojure-hamburg (2)
- # clojure-miami (6)
- # clojure-nl (5)
- # clojure-russia (285)
- # clojure-spain (2)
- # clojurebridge (3)
- # clojurescript (137)
- # code-reviews (14)
- # community-development (6)
- # core-async (8)
- # core-matrix (10)
- # cursive (2)
- # datascript (1)
- # datomic (24)
- # dirac (2)
- # emacs (5)
- # hoplon (4)
- # incanter (6)
- # jobs (7)
- # ldnclj (42)
- # ldnproclodo (2)
- # leiningen (1)
- # mount (60)
- # off-topic (15)
- # om (134)
- # onyx (65)
- # perun (4)
- # portland-or (2)
- # proton (15)
- # random (1)
- # re-frame (24)
- # reagent (7)
- # testing (4)
- # yada (9)
hey @lucasbradstreet i see that you’re planning to deprecate :onyx/restart-pred-fn, which we’re using to keep the job going when a task throws. what should we do instead; use :lifecycle/handle-exception? if so, can you guide me to documentation on it, please, as the cheat-sheet doesn’t yet cover this key
Yes, lifecycle/handle-exception is the way forward. However, it has some issues that need resolving before we deprecate restart-pred-fn, so you might be safe for a little bit
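[Editorial aside: a minimal sketch of the :lifecycle/handle-exception approach discussed above. The task name and the exact handler arity are assumptions — the signature has varied across Onyx versions — so check the docs for your release before copying this.]

```clojure
;; Sketch: restart the task on any exception, replacing :onyx/restart-pred-fn.
;; The handler is assumed to receive the event map, the lifecycle entry, the
;; lifecycle name, and the throwable, and to return :restart, :kill, or :defer.
(defn handle-exception [event lifecycle lifecycle-name e]
  :restart)

;; Calls map referenced by the lifecycle entry below.
(def calls
  {:lifecycle/handle-exception handle-exception})

;; Lifecycle entry attaching the handler to a task
;; (:seeds-by-territory is just an example task name).
(def lifecycles
  [{:lifecycle/task :seeds-by-territory
    :lifecycle/calls ::calls}])
```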
ok. i’ll keep an eye on the changelog. busy upgrading to 0.8.4 now
We’ll let you know when we’re closer to deprecating it, including documenting what you need to do in changes.md
100%, thank you
We need to get a new cheat sheet up. Need some automation in the release process there ;)
-grin- you guys do a great job of automating things
hey so, the two new people you added to the team, what are they helping with?
Whatever they feel like. @gardnervickers has been revamping the template, which we should be launching soon. The new one uses a lot of the best practices that we’ve picked up along the way. It’s going to be awesome.
Hey, the last couple of hours I’ve been playing around with windows and triggers. I have configured a simple fixed window that’s counting segments, and a trigger that logs the value every 5 seconds. It looks like I’m getting 2 triggers fired every 5 seconds though, each with different aggregate values.
For example, this is 20 seconds of output
{"gb" 131}
{"gb" 192}
{"gb" 131}
{"gb" 193}
{"gb" 132}
{"gb" 196}
{"gb" 136}
{"gb" 199}
Ah yes I was about to ask if you're grouping.
When you use grouping there is a window for each grouping key. I wasn't aware that a trigger would be fired for each. I'd have to check that
I'm about to have dinner but I'll give it a look afterwards
I have an upstream flow condition that’s filtering territory=`gb`, so if that’s right there should only be 1 window: for gb?
Yeah that's right
@lsnape: is it possible you have two peers allocated to that task, and thus two timers?
initially i had min-peers + max-peers set to 2, but it made me suspicious so I switched it back to 1. Alas, the behaviour is the same
here is the task:
{:onyx/name :seeds-by-territory
:onyx/type :function
:onyx/fn :clojure.core/identity
:onyx/batch-size 20
:onyx/group-by-key :territory
:onyx/min-peers 1
:onyx/max-peers 1
:onyx/flux-policy :continue}
window:
{:window/id :territory-seed-count
:window/task :seeds-by-territory
:window/type :fixed
:window/aggregation :onyx.windowing.aggregation/count
:window/window-key :receivetime
:window/range [1 :hours]}
and trigger:
{:trigger/window-id :territory-seed-count
:trigger/refinement :accumulating
:trigger/on :timer
:trigger/period [10 :seconds]
:trigger/sync ::update-territory-counts}
I think I’m seeing it fire multiple times too
so I can confirm that I get multiple fires when I eval the task in the repl while the job is running
Yeah, it’s looking more like a bug on my end too
Aha. I think it may be setting up a trigger on all tasks, even those that aren’t relevant to the trigger.
I’ll figure it out from here
Thanks for the report. Always good to report something that looks out of order
At least from what I’m seeing
Oh wait
I set min-peers, not max-peers
cool. Glad to assist. I’ll be working on this stuff all today (GMT) so let me know if you want me to reproduce anything
Cool. I’ll be in touch. I’ll try to reproduce it - I think I was on the wrong track
What do you mean “eval the task in the repl while the job is running”?
so I’m printing to stdout in the task function. I noticed that when I evaluated that function e.g. to change what’s printed, I was getting duplicate messages
you mean the sync fn?
Hah, I have no idea why that would cause multiple calls
I’ll try again. Maybe duplicate fires started happening when I eval’d the sync-fn, by chance!
Interesting anyway. I’m not able to reproduce it once I reduced n-peers to 1
I don’t know if this is relevant, but I’m consuming from a kafka topic with 8 partitions. My input task has 8 min/max peers
It should only matter how many peers the task with the trigger has
yeah, so it looks like sometimes the triggers fire once and then soon after duplicates appear
OK, so I think what’s happening is you’re using a fixed window of a certain size, and there are two windows, and the trigger is firing for each
Sorry, I think that was the expected behaviour
I think I mistook a fixed window for a global one
hmm, but the window range is 1 hour. I wouldn’t expect 2 windows when the trigger fires every 10 secs?
Right, but do the values for the segments on key :receivetime fall within two 1-hour windows?
Basically the trigger will fire every 10s, and then iterate over the different windows that the segments have been bucketed into, and call the sync-fn for each of these windows
That makes sense. I made the assumption that they were in order. Here’s a sample:
"2016-01-16T14:19:38.699Z"
"2016-01-16T14:19:40.076Z"
"2016-01-16T14:00:00.592Z"
"2016-01-16T13:58:32.753Z"
"2016-01-16T13:53:02.021Z"
"2016-01-16T14:29:00.552Z"
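[Editorial aside: the sample timestamps above do straddle two hour-aligned extents, which matches the pair of counts seen per timer fire. A quick way to check, by bucketing on the date-plus-hour prefix of each ISO string:]

```clojure
;; The six sample :receivetime values from the chat above.
(def sample-timestamps
  ["2016-01-16T14:19:38.699Z"
   "2016-01-16T14:19:40.076Z"
   "2016-01-16T14:00:00.592Z"
   "2016-01-16T13:58:32.753Z"
   "2016-01-16T13:53:02.021Z"
   "2016-01-16T14:29:00.552Z"])

;; Approximate a [1 :hours] fixed-window extent by the "yyyy-MM-ddTHH" prefix.
(frequencies (map #(subs % 0 13) sample-timestamps))
;; Four segments land in the 14:00 extent and two in the 13:00 extent, so a
;; single timer fire syncs two windows, each with its own aggregate value.
```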
you get some window metadata passed in as the second-to-last argument, including the window-id, lower-bound, upper-bound, and the firing context
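[Editorial aside: putting the explanation above together, a sync fn is invoked once per window extent on each trigger fire, with the window metadata as the second-to-last argument. This is only a sketch — the surrounding arguments are assumptions and the exact arity differs between Onyx versions:]

```clojure
;; Sketch of the ::update-territory-counts sync fn from the trigger map above.
;; Destructures the window metadata described in the chat; the leading
;; event/window/trigger arguments are assumed, not confirmed by the source.
(defn update-territory-counts
  [event window trigger
   {:keys [window-id lower-bound upper-bound context]} state]
  (println "window" window-id
           "extent" lower-bound "->" upper-bound
           "state" state))
```

Because this runs once per extent, segments spread over two 1-hour extents produce two calls (and two printed counts) for every timer fire, which is the behaviour observed earlier in the conversation.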
You’re welcome