This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-04-03
@lmergen That would do it. onyx-kafka has been tested up to Kafka 0.10.1.0
hmm... does an instance of a window (session based) have a unique id?
hmmm.... maybe I'm just using the wrong trigger.....
yeah, I think what I'm really after is a watermark trigger
@theblackbox hmm what exactly are you after?
simply count sessions
I think if I use a watermark trigger this should fire whenever a new session is created by a segment that exceeds bounds... correct?
then I simply need to touch my profile in db with an $inc
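A session-window-plus-watermark setup like the one discussed above might look like this in an Onyx job. This is a hedged sketch only: the task name (:process-events), keys (:event-time, :user-id), timeout gap, and sync function are illustrative assumptions, not taken from the conversation.

```clojure
;; Hypothetical sketch: count segments per session window,
;; firing a sync function when the watermark trigger fires.
(def windows
  [{:window/id          :count-sessions
    :window/task        :process-events          ; assumed task name
    :window/type        :session
    :window/aggregation :onyx.windowing.aggregation/count
    :window/window-key  :event-time              ; assumed timestamp key
    :window/session-key :user-id                 ; assumed session key
    :window/timeout-gap [30 :minutes]}])         ; assumed gap

(def triggers
  [{:trigger/window-id :count-sessions
    :trigger/id        :sync-session-count
    :trigger/on        :onyx.triggers/watermark
    ;; ::write-count-to-db is a hypothetical sync fn, e.g. one that
    ;; performs the $inc against the profile document mentioned above.
    :trigger/sync      ::write-count-to-db}])
```

The watermark trigger fires once segment timestamps pass the upper bound of the window, which matches the "fire when a session is closed out" behaviour described above.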
@michaeldrogalis it appears that in 10.beta-10 i'm getting "zookeeper corruptions" for a specific job on a tenancy. The onyx-peer can't see the job on the tenancy-id and i get the following error in onyx-dashboard when it sees the "corrupted job" https://gist.github.com/hhutch/bff5c90fb189b71be25caea4a18fb07b
@hunter Okay, thanks. Will try to get this one patched up tonight.
Config looks alright to me. Even if it wasn’t, it should result in an error on the config about an invalid format/spec. Assertions blowing up are never intentional for user-facing code
what's weird to me is that on the onyx peer it acts as if there are no registered jobs on the tenancy-id ... and i get no log files that indicate the issue on the onyx-peer ...
I can expedite looking at it if you want to get on a support contract - busy till end of day otherwise. Sorry. 😕
it doesn't look like the last few onyx-datomic betas have made it to clojars https://clojars.org/org.onyxplatform/onyx-datomic/versions
@jeremy we had some problems with our build system, however I have released beta10 manually twice now so that is very strange! I'll look into it now
Beat me to it. I just manually released beta10
beta10 is up. Something must be going on with your manual Clojars release settings @lucasbradstreet
Looks good now. I don't know what has happened with mine. Very strange
Np, that has been an annoyance for a few weeks.
@hunter are you using a brand new tenancy id after you upgraded to 0.10? I assume the dashboard and the peers are on the same version?
@lucasbradstreet yes the scenario here is that i had a new tenancy, i have a topology running for several hours with fairly constant throughput ... then the topology "crashes" ...
That's a strange one indeed
this happened earlier today, but in that case the compute node (google compute) had crashed and had to be rebooted
I'm on a train at the moment, but once I'm back home I'll get you a dashboard version that can get us more debugging information so I can fix it
I see that the node left in that log entry but that assertion still shouldn't be hit
Hi guys! Any tips on this exception? I'm using the datomic read-log plugin.
clojure.lang.ExceptionInfo: Unfreezable type: class clojure.lang.Delay
as-str: "#object[clojure.lang.Delay 0x411f2503 {:status :pending, :val nil}]"
type: clojure.lang.Delay
clojure.lang.ExceptionInfo: Handling uncaught exception thrown inside task lifecycle - killing this job. -> Exception type: clojure.lang.ExceptionInfo. Exception message: Unfreezable type: class clojure.lang.Delay
as-str: "#object[clojure.lang.Delay 0x411f2503 {:status :pending, :val nil}]"
job-id: #uuid "45a0ffcc-14db-4a82-a69d-ad88882cdf65"
metadata: {:job-id #uuid "45a0ffcc-14db-4a82-a69d-ad88882cdf65", :job-hash "83d0b96bbb7be94ef47cb4844b41196e661e25838f11ccca269a06d831fc3"}
peer-id: #uuid "bb43dc91-899b-4eba-9c5d-374db14f9d6e"
task-name: :read-log
type: clojure.lang.Delay
This one again. 😕 As far as we know, this one is a bug in Datomic. We asked their support about how a Delay can be returned from their API and they kinda scratched their heads.
@robert-stuttaford I know this one came up once with you. I presume you figured out a workaround?
I don’t believe there was ever a workaround established. Some theorized it was because tx-range isn’t lazy, but there was a bit of disagreement about whether that is even true.
I’m not sure why delays leak out sometimes, but it’s not us generating them. I could do a check when I poll. It really shouldn’t be necessary though.
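A defensive version of the "check when I poll" idea could force any Delays in a segment before it reaches the serializer. This is a hypothetical sketch, not part of onyx-datomic; the function name is invented here.

```clojure
(ns user
  (:require [clojure.walk :as walk]))

(defn force-delays
  "Walk a segment and deref any clojure.lang.Delay values so the
   result contains only freezable data. Hypothetical workaround;
   not part of onyx-datomic."
  [segment]
  (walk/postwalk
   (fn [x] (if (delay? x) (deref x) x))
   segment))

;; e.g. (force-delays {:tx-data (delay [:a :b])}) => {:tx-data [:a :b]}
```

Forcing the Delay does have a cost (whatever computation it wraps runs eagerly), so it would only be appropriate once it's confirmed the wrapped value is cheap to realize.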
@lellis by the way, what version of onyx-datomic are you using?