This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-10-23
Channels
- # aws-lambda (1)
- # bangalore-clj (3)
- # beginners (80)
- # boot (8)
- # clojars (1)
- # clojure (200)
- # clojure-dev (37)
- # clojure-greece (26)
- # clojure-italy (11)
- # clojure-norway (3)
- # clojure-russia (14)
- # clojure-spec (21)
- # clojure-uk (30)
- # clojurescript (50)
- # core-logic (10)
- # core-matrix (1)
- # cursive (15)
- # data-science (21)
- # datomic (45)
- # devcards (2)
- # emacs (4)
- # fulcro (12)
- # garden (2)
- # jobs (5)
- # juxt (1)
- # lambdaisland (1)
- # leiningen (4)
- # luminus (20)
- # lumo (26)
- # off-topic (33)
- # onyx (27)
- # parinfer (1)
- # pedestal (3)
- # perun (5)
- # re-frame (20)
- # reagent (27)
- # ring (1)
- # ring-swagger (21)
- # shadow-cljs (259)
- # spacemacs (14)
- # yada (3)
I think I found a bug in onyx-kafka: https://github.com/onyx-platform/onyx-kafka/pull/45
Oof. That’s a bad one.
Thank you. I don’t know how that one snuck in.
Merged. Will cut a release shortly.
Cool, I came across it when I wanted to add support for setting timestamps: https://github.com/onyx-platform/onyx-kafka/pull/46
Thanks for that one too. I commented on the PR - it’d be good to add to the test for that one.
Actually, I don’t mind adding that myself.
I think you may need to be a little careful when using this feature, as I think there’s a window of “acceptability” for Kafka timestamps, and if you get into replay situations you could get stuck in a restart cycle, since we have no way of ignoring messages that fail in this plugin yet.
Thanks
Yeah, I haven’t really used them in that way yet either.
I’m just judging it off of the docs:
The proposed change will implement the following behaviors:
- Allow the user to stamp the message at produce time.
- When a leader broker receives a message:
  - If message.timestamp.type=LogAppendTime, the server will override the timestamp with its current local time and append the message to the log.
    - If the message is a compressed message, the timestamp in the wrapper message will be updated to the current server time. The broker will set the timestamp type bit in the wrapper message to 1 and will ignore the inner message timestamps. Writing the current server time only to the wrapper message, instead of to each inner message, avoids a recompression penalty when LogAppendTime is used.
    - If the message is a non-compressed message, the timestamp in the message will be overwritten with the current server time.
  - If message.timestamp.type=CreateTime:
    - If the time difference is within a configurable threshold, the server will accept it and append it to the log. For a compressed message, the server will update the timestamp in the compressed (wrapper) message to the largest timestamp of the inner messages.
    - If the time difference is beyond the configured threshold, the server will reject the entire batch with TimestampExceededThresholdException.
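The CreateTime acceptance rule quoted above can be sketched as a small validation function. This is a minimal Python illustration, not Kafka's actual broker code; the threshold value and function name are assumptions for the example, and the exception name is taken from the quoted docs.

```python
# Illustrative sketch of the broker-side CreateTime check described above.
# The threshold default and function name are hypothetical; only the
# accept/reject behavior mirrors the quoted proposal.

class TimestampExceededThresholdException(Exception):
    """Raised when a message timestamp is too far from the broker's clock."""

def validate_create_time(message_ts_ms, broker_ts_ms, threshold_ms=60_000):
    """Accept the message only if its CreateTime is within threshold_ms of
    the broker's local time; otherwise reject (the real broker rejects the
    whole batch)."""
    if abs(broker_ts_ms - message_ts_ms) > threshold_ms:
        raise TimestampExceededThresholdException(
            f"timestamp {message_ts_ms} differs from broker time "
            f"{broker_ts_ms} by more than {threshold_ms} ms")
    return message_ts_ms
```

This is also why the replay warning above matters: a replayed message whose stamped CreateTime has drifted outside the threshold would be rejected again on every retry.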
@chrisblom looks good. I’m happy to merge it if you’re done
Thanks!
@lmergen It's been forever since I've worked on onyx-sql. You mentioned write-row-calls is defunct. What took its place?
@lmergen I also noticed that we don’t do any cleanup for the jdbc connection. We probably need to close it in the plugin stop
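The cleanup being suggested here can be sketched generically: close the connection in the plugin's stop hook so it isn't leaked across job restarts. A minimal Python illustration, assuming a hypothetical plugin class and stop lifecycle (not onyx-sql's actual Clojure API):

```python
# Hypothetical plugin with an explicit stop hook; illustrates the
# "close the connection in plugin stop" suggestion above.
class SqlPlugin:
    def __init__(self, connect):
        # connect is any zero-arg factory returning a connection object
        self.conn = connect()

    def stop(self):
        # Release the connection when the plugin stops, and make the
        # hook idempotent so repeated stops are harmless.
        if self.conn is not None:
            self.conn.close()
            self.conn = None
```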
FYI, we have made more progress on Google Cloud Storage. We have successfully run a job using it in Kubernetes. Our next step is to put it under some load with metrics to see if we have any issues.
@camechis Great. Good progress 🙂
@chrisblom 0.11.1.1 is releasing through our CI now. It’ll be out soon.
Everyone: onyx-kafka 0.11.1.1 is out with an important partition selection fix to the output plugin, plus support for arbitrary output timestamps https://github.com/onyx-platform/onyx-kafka/blob/0.11.x/CHANGES.MD#01101. Thanks @chrisblom!