This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # beginners (17)
- # boot (19)
- # chestnut (1)
- # cider (25)
- # clara (1)
- # cljs-dev (15)
- # cljsrn (10)
- # clojars (9)
- # clojure (182)
- # clojure-brasil (27)
- # clojure-dusseldorf (2)
- # clojure-gamedev (5)
- # clojure-germany (1)
- # clojure-greece (2)
- # clojure-italy (18)
- # clojure-poland (5)
- # clojure-romania (3)
- # clojure-russia (29)
- # clojure-serbia (6)
- # clojure-spec (9)
- # clojure-uk (77)
- # clojure-ukraine (1)
- # clojurescript (61)
- # cursive (5)
- # datomic (20)
- # defnpodcast (1)
- # emacs (10)
- # fulcro (2)
- # graphql (2)
- # hoplon (11)
- # lumo (4)
- # off-topic (50)
- # om (3)
- # onyx (26)
- # other-languages (39)
- # parinfer (2)
- # pedestal (5)
- # re-frame (32)
- # reagent (48)
- # rum (7)
- # shadow-cljs (10)
- # spacemacs (29)
- # sql (10)
- # unrepl (58)
- # vim (3)
Hey, I just opened an issue on onyx-kafka about troubleshooting a simple job that isn't writing to a topic. https://github.com/onyx-platform/onyx-kafka/issues/47
Getting an error that looks like it's stemming from onyx itself: integer overflow
Anyone seen similar?
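(For context on the error message: "integer overflow" is the text of the `ArithmeticException` that Clojure's default overflow-checked long arithmetic throws, so it usually points at `long` math somewhere in the stack rather than anything the segments themselves contain. A minimal REPL illustration:)

```clojure
;; Clojure's default long arithmetic is overflow-checked and throws
;; ArithmeticException with the message "integer overflow":
(try
  (+ Long/MAX_VALUE 1)
  (catch ArithmeticException e
    (.getMessage e)))               ; => "integer overflow"

;; the promoting and unchecked variants behave differently:
(+' Long/MAX_VALUE 1)               ; => 9223372036854775808N (auto-promotes to BigInt)
(unchecked-add Long/MAX_VALUE 1)    ; => -9223372036854775808 (silently wraps)
```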
:conform-health-check-msg threw the exception. Do you have any lifecycles set up? (http://www.onyxplatform.org/docs/user-guide/0.12.x/#_example) If you don't, the task will die and not restart; that will render the job broken and everything will need restarting.
Though it's still prudent to look at what's happening in the task itself and catch the exception in the first place, obviously.
I’ll pop in with some additional info later
@jasonbell So I see the task name, but since there was nothing in the stack trace and the code is this:
We don't have any lifecycles set up for this task, and it didn't look like we were modifying any integers, which is why I thought it might be in the internals of onyx.
(defn conform-health-check-message [segment]
  (let [result (:result segment)
        ts (if-let [ts (:timestamp segment)]
             ts ; expect this to be in msecs
             (.getMillis (time/now)))
        output (-> segment
                   (select-keys [:hash :config :lambda :commit])
                   (assoc :timestamp ts
                          :result (keyword result)))]
    (debug "Conformed: " output)
    output))
It definitely is onyx internals. More when I’m done with a call.
Sure, no rush. The job successfully restarted from its resume point, it seems. Thanks
@eriktjacobsen As long as you’re sorted. Previously I’ve wrapped each task with lifecycle events just in case and then handled all the exceptions so the job doesn’t have a chance to fail.
Alright, so the problem there is that it took a really long time to write out a batch to the task downstream of your task, and we overflowed a long in terms of how many nanoseconds it took.
The second problem, as @jasonbell accurately described, is that you don’t have a handle-exception lifecycle on your tasks as a failsafe for whether to continue running the job.
So, I would think the actions for us are to fix the overflow. Your actions are to figure out why it might have taken so long for that task to write the batch, as well as add the exception lifecycle.
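(As a sketch of what that failsafe looks like, based on the Onyx lifecycle API: a `:lifecycle/handle-exception` function receives the event map, the lifecycle entry, the lifecycle name, and the exception, and its return value tells Onyx what to do. The task name `:conform-health-check-msg` is taken from the error above; the var names here are illustrative.)

```clojure
;; Returning :restart tells Onyx to restart the task instead of killing the
;; job; :kill and :defer are the other recognized return values.
(def handle-exception-calls
  {:lifecycle/handle-exception
   (fn [event lifecycle lifecycle-name e]
     ;; log e here as needed before deciding
     :restart)})

;; Attach it to every task in the job with :lifecycle/task :all, or name a
;; specific task such as :conform-health-check-msg.
(def lifecycles
  [{:lifecycle/task :all
    :lifecycle/calls ::handle-exception-calls}])
```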
My bad for assuming we would never overflow that long 😄
Ah, looking through the logs, it seems there were some ZK timeouts happening around that time.
Yeah, I’m guessing you got blocked downstream, and so upstream was trying to offer the segments to it and got stuck.
@jasonbell hah, I have a helper just like that. Actually in this case it’s already a long, but nanoseconds are kinda big to start with 😮, so we overflowed the long anyway.
I’m actually not sure how that overflowed, as it would have had to be a lot of hours (many many thousands)
I’ll have to figure it out anyway.
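(A quick back-of-the-envelope check supports that puzzlement: straightforward summation of wall-clock nanoseconds would take millions of hours to overflow a long, so a weekend-long job shouldn't get there by elapsed time alone.)

```clojure
;; How many hours of continuously accumulated nanoseconds fit in a long?
(def ns-per-hour (* 3600 1000 1000 1000))   ; 3,600,000,000,000 ns per hour
(quot Long/MAX_VALUE ns-per-hour)           ; => 2562047 hours, roughly 292 years
```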
Ahhh, it’s not resetting the accumulated time when you’re processing batches of size 0, so if you have a long-running job that isn’t receiving any segments, it’ll continue to accumulate. How long was that job running for, approximately?
The weekend, since Friday. Looks like timeouts were happening here and there, but they started majorly ramping up about an hour before the exception, which is ultimately what stopped the job.
OK, the overflow still doesn’t completely make sense to me then.
Anyway, I’ll put in some code to prevent the overflow; with the lifecycle addition, the job would have recovered.