#onyx
2017-04-03
michaeldrogalis02:04:32

@lmergen That would do it. onyx-kafka has been tested up to Kafka 0.10.1.0

theblackbox12:04:54

hmm... does an instance of a window (session based) have a unique id?

theblackbox12:04:37

hmmm.... maybe I'm just using the wrong trigger.....

theblackbox13:04:56

yeah, I think what I'm really after is a watermark trigger

gardnervickers13:04:42

@theblackbox hmm what exactly are you after?

theblackbox13:04:20

simply count sessions

theblackbox13:04:03

I think if I use a watermark trigger this should fire whenever a new session is created by a segment that exceeds bounds... correct?

theblackbox13:04:33

then I simply need to touch my profile in db with an $inc
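
[For reference, counting sessions along these lines might look roughly like the following Onyx window/trigger config. This is a hedged sketch only: the task name, window key, session key, and the sync function are all hypothetical, and the exact trigger and sync-fn signature should be checked against the Onyx windowing docs for the version in use.]

```clojure
;; Sketch: a session window plus a watermark trigger. :window/task,
;; :window/window-key, :window/session-key and ::write-session-count!
;; are hypothetical names, not from this conversation.
(def windows
  [{:window/id          :count-sessions
    :window/task        :identity-task
    :window/type        :session
    :window/aggregation :onyx.windowing.aggregation/count
    :window/window-key  :event-time
    :window/session-key :user-id
    :window/timeout-gap [30 :minutes]}])

(def triggers
  [{:trigger/window-id :count-sessions
    :trigger/id        :sync-session-count
    :trigger/on        :onyx.triggers/watermark
    :trigger/sync      ::write-session-count!}])

;; The sync fn would receive the window's state (here a count) and could
;; issue the $inc against the profile document, e.g. via a Mongo client.
(defn write-session-count!
  [event window trigger state-event session-count]
  (println "sessions counted in window:" session-count))
```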

hunter20:04:10

@michaeldrogalis it appears that in 0.10-beta10 I'm getting "zookeeper corruptions" for a specific job on a tenancy. The onyx-peer can't see the job on the tenancy-id, and I get the following error in onyx-dashboard when it sees the "corrupted job": https://gist.github.com/hhutch/bff5c90fb189b71be25caea4a18fb07b

michaeldrogalis20:04:19

@hunter Okay, thanks. Will try to get this one patched up tonight.

hunter20:04:48

is it possible i have an incorrect setting in my peer config?

michaeldrogalis20:04:52

Config looks alright to me. Even if it wasn’t, it should result in an error on the config about an invalid format/spec. Assertions blowing up are never intentional for user-facing code

hunter20:04:42

what's weird to me is that the onyx peer acts as if there are no registered jobs on the tenancy-id ... and I get no log files on the onyx-peer that indicate the issue ...

hunter20:04:50

but on the onyx-dashboard i get that behavior

hunter20:04:12

it's happened 2x today so far on a production system

michaeldrogalis20:04:46

I can expedite looking at it if you want to get on a support contract - busy till end of day otherwise. Sorry. 😕

hunter20:04:58

i understand, thanks for your help.

jeremy20:04:11

it doesn't look like the last few onyx-datomic betas have made it to clojars https://clojars.org/org.onyxplatform/onyx-datomic/versions

lucasbradstreet21:04:41

@jeremy we had some problems with our build system, however I have released beta10 manually twice now so that is very strange! I'll look into it now

michaeldrogalis21:04:08

Beat me to it. I just manually released beta10

michaeldrogalis21:04:46

beta10 is up. Something must be going on with your manual Clojars release settings @lucasbradstreet

lucasbradstreet21:04:00

Looks good now. I don't know what has happened with mine. Very strange

jeremy21:04:07

heh, thanks!

michaeldrogalis21:04:57

Np, that has been an annoyance for a few weeks.

lucasbradstreet21:04:11

@hunter are you using a brand new tenancy id after you upgraded to 0.10? I assume the dashboard and the peers are on the same version?

hunter21:04:16

@lucasbradstreet yes, the scenario here is that I had a new tenancy, with a topology running for several hours at fairly constant throughput ... then the topology "crashes" ...

hunter21:04:55

i restart the onyx server and it never sees the job on the tenancy from then on

hunter21:04:16

then i check with onyx-dashboard, running off HEAD of the 0.10 branch

hunter21:04:29

and the rest as stated above

lucasbradstreet21:04:39

That's a strange one indeed

hunter21:04:15

this happened earlier today, but in that case the compute node (google compute) had crashed and had to be rebooted

lucasbradstreet21:04:32

I'm on a train at the moment, but once I'm back home I'll get you a dashboard version that can get us more debugging information so I can fix it

lucasbradstreet21:04:49

I see that the node left in that log entry, but that assertion still shouldn't be hit

hunter21:04:20

appreciated, thanks for y'alls help

hunter21:04:40

i'm just keeping an eye on it for now

lellis21:04:46

Hi guys! Any tip on this exception? I'm using the datomic read-log plugin.

clojure.lang.ExceptionInfo: Unfreezable type: class clojure.lang.Delay
    as-str: "#object[clojure.lang.Delay 0x411f2503 {:status :pending, :val nil}]"
      type: clojure.lang.Delay
clojure.lang.ExceptionInfo: Handling uncaught exception thrown inside task lifecycle - killing this job. -> Exception type: clojure.lang.ExceptionInfo. Exception message: Unfreezable type: class clojure.lang.Delay
       as-str: "#object[clojure.lang.Delay 0x411f2503 {:status :pending, :val nil}]"
       job-id: #uuid "45a0ffcc-14db-4a82-a69d-ad88882cdf65"
     metadata: {:job-id #uuid "45a0ffcc-14db-4a82-a69d-ad88882cdf65", :job-hash "83d0b96bbb7be94ef47cb4844b41196e661e25838f11ccca269a06d831fc3"}
      peer-id: #uuid "bb43dc91-899b-4eba-9c5d-374db14f9d6e"
    task-name: :read-log
         type: clojure.lang.Delay

michaeldrogalis21:04:30

This one again. 😕 As far as we know, this is a bug in Datomic. We asked their support how a Delay can be returned from their API and they kinda scratched their heads.

michaeldrogalis21:04:55

@robert-stuttaford I know this one came up once with you. I presume you figured out a workaround?

lucasbradstreet23:04:16

I don’t believe a workaround was ever established. Some theorized it was because tx-range isn’t lazy, but there was some disagreement about whether that is even true.

lucasbradstreet23:04:37

I’m not sure why delays leak out sometimes, but it’s not us generating them. I could do a check when I poll. It really shouldn’t be necessary though.
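
[One shape such a check could take: force any Delay values in a polled segment before it reaches the serializer. A hedged sketch only — it does not use the actual plugin internals, and whether forcing the Delay is safe here is exactly the open question above.]

```clojure
;; Sketch: recursively replace any clojure.lang.Delay in a segment with
;; its forced value, so nothing unfreezable reaches serialization.
;; `raw-segments` below is a hypothetical stand-in for whatever the
;; read-log poll returned from d/tx-range.
(require '[clojure.walk :as walk])

(defn force-delays
  "Walk a value and deref any Delay encountered."
  [x]
  (walk/postwalk
   (fn [v] (if (delay? v) @v v))
   x))

;; usage: (map force-delays raw-segments)
```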

lucasbradstreet23:04:21

@lellis by the way, what version of onyx-datomic are you using?