#onyx
2017-10-23
lucasbradstreet18:10:02

Oof. That’s a bad one.

lucasbradstreet18:10:42

Thank you. I don’t know how that one snuck in.

lucasbradstreet18:10:22

Merged. Will cut a release shortly.

chrisblom18:10:25

Cool, I came across it when I wanted to add support for setting timestamps: https://github.com/onyx-platform/onyx-kafka/pull/46

lucasbradstreet18:10:52

Thanks for that one too. I commented on the PR - it’d be good to add a test for that one.

lucasbradstreet18:10:13

Actually, I don’t mind adding that myself.

lucasbradstreet18:10:51

I think you may need to be a little careful when using this feature. There’s a window of “acceptability” for Kafka timestamps, and if you get into replay situations you could get stuck in a restart cycle, since we have no way of ignoring messages that fail in this plugin yet.
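
For reference, a minimal sketch of the topic-level Kafka settings that define that acceptability window, assuming the stock message.timestamp.type and message.timestamp.difference.max.ms configs; this concerns broker behaviour only, not anything onyx-kafka itself exposes:

```clojure
;; Sketch only (not part of onyx-kafka): the "acceptability window" for
;; CreateTime timestamps is controlled per topic/broker in Kafka itself.
(def timestamp-topic-config
  {;; CreateTime keeps the producer-supplied timestamp; LogAppendTime makes
   ;; the broker overwrite it with its own clock.
   "message.timestamp.type"              "CreateTime"
   ;; Maximum allowed difference (ms) between the broker clock and a record's
   ;; timestamp before the broker rejects the batch.
   "message.timestamp.difference.max.ms" "3600000"})
```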

chrisblom18:10:56

good to know, I haven't really used them yet

chrisblom18:10:09

i'm already working on the test btw

lucasbradstreet18:10:16

Yeah, I haven’t really used them in that way yet either.

lucasbradstreet18:10:40

I’m just judging it off of the docs:

lucasbradstreet18:10:41

The proposed change will implement the following behaviors.
- Allow the user to stamp the message when producing.
- When a leader broker receives a message:
  - If message.timestamp.type=LogAppendTime, the server will override the timestamp with its current local time and append the message to the log.
    - If the message is a compressed message, the timestamp in the wrapper message will be updated to the current server time. The broker will set the timestamp type bit in wrapper messages to 1 and ignore the inner message timestamps. We do this instead of writing the current server time to each message to avoid the recompression penalty when people are using LogAppendTime.
    - If the message is a non-compressed message, the timestamp in the message will be overwritten with the current server time.
  - If message.timestamp.type=CreateTime:
    - If the time difference is within a configurable threshold, the server will accept it and append it to the log. For compressed messages, the server will update the timestamp in the compressed message to the largest timestamp of the inner messages.
    - If the time difference is beyond the configured threshold, the server will reject the entire batch with TimestampExceededThresholdException.
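
To make the first behavior concrete, a minimal kafka-clients interop sketch of stamping a record with an explicit timestamp at produce time; this uses the raw Java client directly rather than the plugin-level support added in the PR above:

```clojure
(import '(org.apache.kafka.clients.producer KafkaProducer ProducerRecord))

;; Sketch: stamp a record with an explicit timestamp when producing.
;; Under CreateTime the broker keeps this value (if it falls within the
;; configured threshold); under LogAppendTime the broker overwrites it.
(defn send-with-timestamp!
  [^KafkaProducer producer topic ts-millis k v]
  ;; ProducerRecord(topic, partition, timestamp, key, value); a nil partition
  ;; lets the partitioner choose.
  (.send producer (ProducerRecord. topic nil (long ts-millis) k v)))
```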

lucasbradstreet19:10:14

@chrisblom looks good. I’m happy to merge it if you’re done

chrisblom19:10:58

hmm, some tests are failing on my machine, but I had only 2% battery

chrisblom19:10:24

ah, they are ok now

chrisblom19:10:29

ok, i'm done

michaeldrogalis21:10:48

@lmergen It's been forever since I've worked on onyx-sql. You mentioned write-row-calls is defunct. What took its place?

lucasbradstreet21:10:16

@lmergen I also noticed that we don’t do any cleanup for the JDBC connection. We probably need to close it in the plugin’s stop.
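
As a rough illustration of that kind of cleanup, a minimal sketch using generic Onyx lifecycle hooks; the ::conn and :sql/jdbc-url keys are hypothetical, and the actual fix would more likely live in the plugin’s stop itself:

```clojure
;; Sketch only: open a JDBC connection per task and close it when the task
;; stops. The ::conn and :sql/jdbc-url keys are illustrative, not onyx-sql's.
(defn inject-conn [event lifecycle]
  {::conn (java.sql.DriverManager/getConnection (:sql/jdbc-url lifecycle))})

(defn close-conn [event lifecycle]
  (when-let [^java.sql.Connection conn (::conn event)]
    (.close conn))
  {})

(def sql-conn-calls
  {:lifecycle/before-task-start inject-conn
   :lifecycle/after-task-stop   close-conn})
```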

Travis22:10:10

Fyi, we have made more progress on Google Cloud Storage. We have successfully run a job using it in Kubernetes. Our next step is to put it under some load with metrics to see if we have any issues.

lucasbradstreet22:10:12

@camechis Great. Good progress 🙂

lucasbradstreet22:10:08

@chrisblom 0.11.1.1 is releasing through our CI now. It’ll be out soon.

lucasbradstreet22:10:23

Everyone: onyx-kafka 0.11.1.1 is out with an important partition selection fix to the output plugin, plus support for arbitrary output timestamps https://github.com/onyx-platform/onyx-kafka/blob/0.11.x/CHANGES.MD#01101. Thanks @chrisblom!