Good to know. Never heard of minio but I'll definitely check it out. At least for now, the ZK seems to meet the need, aside from the memory growth. If the minio doesn't work out, I might think about GC'ing for checkpointed storage. If it comes to that, I'll be needing some hints about where to begin... 😉


@brianh sure thing. Please let us know what you end up doing so we have another data point.


Hey all. How are you? I asked a question here earlier about Onyx 0.11 and Kafka 0.9. I have pulled out part of the code of the onyx-kafka plugin and made a version which seems to be compatible with both Kafka 0.9 and Kafka 0.11. I downgraded the org.apache dependency version and made some minor changes to the code in the kafka-helper function. I can imagine that other solutions may work as well, but I was wondering whether it would be possible to create a PR for this against the onyx-kafka plugin repo, probably in another branch. I understand if it is not a good idea, but it would help me in the sense that I would not need to maintain a fork, and maybe other people would benefit from this too. Please let me know.


@eelke Can you expand on how you made 0.9 and 0.11 work in one shot? Those versions have incompatible APIs.


The result is that I have two jobs running with the same code: one reading from Kafka 0.9 and the other from 0.11.


Using a different version of Kafka client and server has caused some problems that we've seen people come here with, so I don't think it would be a good idea to pin the dependency on 0.9 in the hopes it will also work for 0.11.


We probably need a dedicated kafka-0.9 repo as we have a kafka-0.8 repo if folks are still on 0.9


Sure sounds reasonable. I was surprised it works in this case though


Me too. More likely the exception than the rule 🙂


Thankfully confluent has done a lot of work recently that will improve compatibility going forward.


If flow conditions are specified, should the corresponding edges also be in the workflow? For the case of :flow/thrown-exception? true, at least, it seems that including them in the workflow causes segments to be flowed through those edges all the time, not just on exceptions. I haven't checked other flow conditions.


@dave.dixon that’s something we want to change. It’s not ideal. Currently you need to do some kind of dispatch to filter out the non-exception segments.
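A minimal sketch of the setup being discussed (the task names :do-stuff and :error and the fn ::handle-error are hypothetical): with the [:do-stuff :error] edge present in :workflow, segments reach :error all the time, which is why the extra dispatch/filtering mentioned above is currently needed.

```clojure
;; Sketch only; :do-stuff, :error, and ::handle-error are hypothetical names.
;; With this edge in :workflow, segments flow to :error unconditionally,
;; not only when an exception is thrown.
(def workflow
  [[:in :do-stuff]
   [:do-stuff :out]
   [:do-stuff :error]])

(def flow-conditions
  [{:flow/from :do-stuff
    :flow/to [:error]
    :flow/thrown-exception? true
    :flow/short-circuit? true
    :flow/post-transform ::handle-error}])
```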


It was intended to allow you to flow exceptions down to nodes, which it does, but it turns out that ultimately nearly everyone wants /only/ those exception segments to flow there.


@lucasbradstreet guess that makes sense since it kind of resembles the flow of try/catch


Yeah, unfortunately it becomes pretty painful to make it work how most people want, as you need to add other flow conditions


Another one that comes up is when using trigger/emit. Most people only want to emit the segments emitted by the trigger, not the segments that flow through the window
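As a hedged sketch of the trigger/emit case (all names other than the :trigger/* and :flow/* keys are hypothetical, and the fn signatures are my reading of the 0.10+ docs): one workaround is to have the emitted segment carry a marker key and filter on it with a flow-condition predicate.

```clojure
;; Sketch; :w, :windowed-task, :summary-out, and the fns are hypothetical.
(defn emit-summary
  "Trigger emit fn: returns a segment tagged so it can be told apart
   from ordinary segments flowing through the window."
  [event window trigger state-event extent-state]
  {:window-summary extent-state :triggered? true})

(def triggers
  [{:trigger/window-id :w
    :trigger/id :emit-summary
    :trigger/on :onyx.triggers/segment
    :trigger/threshold [5 :elements]
    :trigger/emit ::emit-summary}])

(defn triggered?
  "Flow predicate: only pass segments produced by the trigger."
  [event old-segment new-segment all-new]
  (boolean (:triggered? new-segment)))

(def flow-conditions
  [{:flow/from :windowed-task
    :flow/to [:summary-out]
    :flow/predicate ::triggered?}])
```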


@lucasbradstreet So is it okay to just leave the error flow edges out of the workflow? Seems to work as desired.


@dave.dixon you’d have to show me what you mean. Do you mean detached DAGs? Or do you mean restricting it by flow conditions?


We're going to need to tweak the way flow conditions work. We've been disgruntled with it for a while too


I mean just leaving the edges out of the :workflow. If I have a flow condition for thrown exceptions that goes from :do-stuff to :error, adding [:do-stuff :error] to :workflow causes everything to be sent, but leaving that edge out and only having the flow condition "works", though my guess is this is really a bug and shouldn't be relied upon.


@michaeldrogalis Yeah, I've been putting together a DSL to allow us to compactly define the graphs. Flow conditions are by far the most "interesting" case.


@dave.dixon That's fantastic, there's been chatter about that for years.


@dave.dixon that sounds like a bug.


I’m incredibly surprised that it works


I’m surprised that it also passes our validation


@michaeldrogalis and I are going to figure out how to fix this case tonight.


I was too. I've only tested with the local runtime, not sure if that has anything to do with it. Anyway, I'll add some explicit filtering and not rely on this behavior.


Ahh. That makes a lot more sense…


Thanks for the report. I’ll let you know when we have something better for you.


Great, thanks.


Onyx 0.11.1 is out with native, job-level watermark support. This allows you to only trigger windows once you’re sure all input sources are past a certain point in time.
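As a hedged sketch of what this enables (the window id :collect-window and sync fn ::write-rows! are hypothetical), a watermark trigger fires only once the job-level watermark has passed the window's upper bound:

```clojure
;; Sketch; :collect-window and ::write-rows! are hypothetical names.
{:trigger/window-id :collect-window
 :trigger/id :on-watermark
 :trigger/on :onyx.triggers/watermark
 :trigger/sync ::write-rows!}
```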


👏 All @lucasbradstreet on this one. This feature is really important for windowing.


quick stupid question: how do you build the Onyx docs from adoc?


@lxsameer asciidoctor docs/user-guide/latest/index.adoc


Best solutions for error logging? We'd like to A: trigger an error with a monitoring lib (ex: datadog), B: write a log message, C: write to an error Kafka topic. We see three paths: either 1. put that logic into each task, 2. configure each task to allow an error segment to flow through it and handle it in our output task, or 3. programmatically connect an error task to all our other tasks and add flow conditions to all of them allowing them to output an error segment. What are others doing?


We recommend (3). Or.. (A) with lifecycles.


Oh, I get it. Are you saying you want all of A, B, and C, and want to accomplish that with paths, 1, 2, or 3?


Gotcha. We use (3) in production and it works great.


We pass the Onyx job through a series of post-compilers, which all take and produce an Onyx job. One of those is to add exception handlers.
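A minimal sketch of that post-compiler idea (not their actual implementation; all names here, including base-job, are hypothetical): since an Onyx job is just a map, each pass is a function from job to job, and the passes compose with reduce.

```clojure
;; Sketch; add-error-handling, add-metrics, and base-job are hypothetical.
(defn add-error-handling
  "Example pass: wire every existing task to an :error task."
  [job]
  (update job :workflow into
          (for [[from _] (:workflow job)] [from :error])))

(defn add-metrics
  "Example pass: attach lifecycles (left as a no-op in this sketch)."
  [job]
  (update job :lifecycles (fnil into []) []))

(defn compile-job
  "Thread a job through a series of job -> job passes."
  [job passes]
  (reduce (fn [j pass] (pass j)) job passes))

(def final-job
  (compile-job base-job [add-error-handling add-metrics]))
```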


Makes sense. Are any of those post-compilers open source / available?


Not at the moment, no.


that is interesting


@lucasbradstreet, @michaeldrogalis: Just a quick update on Google Cloud Storage support. Just got unit tests passing. By no means would I say we're 100% there but definitely a positive sign.


@jholmberg I recognize you from your GitHub icon -- you were around in the very early days, right?


I'm an old school Ghostbusters fan (rented it on VHS when it came out) so been around a little while. Now that efforts on our side are picking up steam with Onyx, been a lot of fun getting more involved.


Hah, cool. Yeah I remember seeing that icon in Gitter years ago. 🙂


(Wasn't implying anything about your age 😉 )


Anyway, glad to see the implementation moving forward!


I was going to imply, lol. Naw he's not that old


nah, it's alright. I'm an old fart 🙂


So once we deploy this is there anything we should look out for ? Meaning issues we may see?


Probably checkpoint latency would be the number one thing. @lucasbradstreet thoughts?


Trying to get a feel for things. Is there a normal size for these checkpoints? Or are they variable?


If you're using windows, they're dependent on the size of those. Otherwise they're pretty tiny