This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
Yeah, I had forgotten about that issue. @michaeldrogalis the triggers don’t even fire for the segment triggers
Yeah, that's what I am seeing. I switched my window and triggers to elements, and still nothing fires.
So the job completes with all the segments in the outbox, but no triggers ever fire, which seems to make sense given the issue + what you said.
No worries, I just was thinking I did something wrong until I finally started poking around the source and then saw the issue after that.
That’s strange, could have sworn I had all the triggers firing before I put it down.
I'll check it out tonight and see if there was a regression somewhere. Maybe I'm losing my marbles and didn't have them working to begin with. o-o
On that note, if anyone can figure out a way to make most of Onyx core’s tests run against local-rt, that’d be pretty awesome. That’s been the goal.
I have an Onyx job that continuously reads data from a web socket. What's the best way to safely tell it to shut down? I have other jobs that read from Kafka, and in those cases I just send the :done sentinel across the topic. But with this particular job, I'm connecting to a 3rd-party site and therefore don't have control of the producer.
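For the Kafka-fed jobs mentioned above, sending the :done sentinel means producing a message that deserializes to the keyword :done. A minimal sketch, assuming the 0.9.x-era onyx-kafka plugin and nippy serialization (whatever serializer the plugin's deserializer-fn expects must be used); `send-to-topic!` is a hypothetical wrapper around any Kafka producer:

```clojure
;; Sketch: signalling completion to a Kafka-fed Onyx job.
;; Assumes nippy serialization; adjust to match your plugin's :deserializer-fn.
(require '[taoensso.nippy :as nippy])

(defn signal-done!
  "Produce the :done sentinel to `topic` so the Onyx input task can complete.
  `send-to-topic!` is a hypothetical (fn [topic bytes] ...) producer wrapper."
  [send-to-topic! topic]
  (send-to-topic! topic (nippy/freeze :done)))
```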
@michaeldrogalis Does that allow segments to flush from the system before killing the job?
Okay, I think it should work fine for the job I have that just reads from a websocket. But the jobs reading from Kafka are using onyx-kafka, so I'm guessing that if I use onyx.api/kill-job on a job after it's sent an ack to Kafka but before it finishes processing, it could potentially result in me skipping messages when I restart if I don't reset the offsets?
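The onyx.api/kill-job call being discussed takes the peer configuration and a job id. A minimal sketch, assuming `peer-config` and `job-def` are the usual maps you already pass to submit-job:

```clojure
;; Sketch: submitting a job and later killing it via the public Onyx API.
(require '[onyx.api])

;; submit-job returns a map that includes the :job-id
(let [{:keys [job-id]} (onyx.api/submit-job peer-config job-def)]
  ;; ... later, when shutting down the websocket-fed job:
  (onyx.api/kill-job peer-config job-id))
```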
@stephenmhopper onyx-kafka will only ack messages that have completed their path through Onyx, so no messages should be lost.
We can do that because Kafka has acknowledgement built in - can’t do the same for websockets because they lack that feature
@mariusz_jachimowicz Some. Completed and killed jobs are cleared: https://github.com/onyx-platform/onyx/blob/0.9.x/src/onyx/log/commands/gc.clj#L17-L28
I think people often want to see this historical data - a summary of which jobs finished, which were killed, and how long they ran. We could store this summarized data in a ZooKeeper node rather than deriving it from the replica. What do you think?
We need to periodically clear away completed jobs to keep the amount of data in ZooKeeper manageable and the replica's memory footprint bounded for long-running Onyx clusters. We could definitely make a tool outside of Onyx core that uses the log subscriber, or queries Onyx's peer health endpoint, to gather that information and then do whatever with it.
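The clearing of completed and killed jobs described above is exposed through the public API as onyx.api/gc. A minimal sketch, assuming `peer-config` is the usual peer configuration map:

```clojure
;; Sketch: garbage-collecting completed/killed jobs from the log/replica
;; state in ZooKeeper, per the linked gc.clj command.
(require '[onyx.api])

(onyx.api/gc peer-config)
```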