This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # beginners (45)
- # boot (33)
- # chestnut (9)
- # cider (2)
- # cljs-dev (24)
- # cljsrn (1)
- # clojure (114)
- # clojure-conj (3)
- # clojure-dev (3)
- # clojure-dusseldorf (3)
- # clojure-greece (5)
- # clojure-italy (22)
- # clojure-russia (10)
- # clojure-spec (12)
- # clojure-uk (19)
- # clojurescript (117)
- # core-async (16)
- # cursive (3)
- # data-science (1)
- # datomic (5)
- # docs (13)
- # emacs (1)
- # fulcro (13)
- # graphql (1)
- # hoplon (20)
- # immutant (3)
- # jobs (1)
- # juxt (12)
- # lein-figwheel (1)
- # luminus (4)
- # off-topic (12)
- # onyx (61)
- # portkey (1)
- # re-frame (21)
- # reagent (26)
- # ring-swagger (38)
- # rum (1)
- # shadow-cljs (222)
- # slack-help (4)
- # spacemacs (11)
- # specter (67)
- # uncomplicate (236)
Good to know. Never heard of minio but I'll definitely check it out. At least for now, the ZK seems to meet the need, aside from the memory growth. If the minio doesn't work out, I might think about GC'ing for checkpointed storage. If it comes to that, I'll be needing some hints about where to begin... 😉
@brianh sure thing. Please let us know what you end up doing so we have another data point.
Hey all. How are you? I asked a question here about onyx 0.11 and kafka 0.9. I have pulled out part of the code of the onyx-kafka plugin and made a version which seems to be compatible with both kafka 0.9 and kafka 0.11. I downgraded the org.apache dependency to 0.9.0.1 and made some minor changes to the kafka-helper code. I can imagine that other solutions may work as well, but I was wondering whether I could open a PR for this against the onyx-kafka plugin repo, probably on another branch. I understand if it is not a good idea, but it would save me maintaining a fork, and maybe other people would benefit from it too. Please let me know.
@eelke Can you expand on how you made 0.9 and 0.11 work in one shot? Those versions have incompatible APIs.
The result is that I have two jobs running with the same code: one reading from kafka 0.9 and the other from 0.11
Using a different version of Kafka client and server has caused some problems that we've seen people come here with, so I don't think it would be a good idea to pin the dependency on 0.9 in the hopes it will also work for 0.11.
We probably need a dedicated kafka-0.9 repo as we have a kafka-0.8 repo if folks are still on 0.9
Thankfully confluent has done a lot of work recently that will improve compatibility going forward.
If flow-conditions are specified, should the corresponding edges also be in the workflow? For the case of :flow/thrown-exception? true, at least, it seems that including them in the workflow causes segments to be flowed through those edges all the time, not just for exceptions. I haven't checked the other flow conditions.
@dave.dixon that’s something we want to change. It’s not ideal. Currently you need to do some kind of dispatch to filter out the non-exception segments.
It was intended to allow you to flow exceptions down to nodes, which it does, but it turns out that ultimately nearly everyone wants /only/ those exception segments to flow there.
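A minimal sketch of the pattern under discussion (task and function names are hypothetical, not from the conversation): an exception flow condition routes thrown exceptions to an :error task, and a second predicate-based condition is the extra dispatch Lucas mentions, keeping ordinary segments flowing only to :out.

```clojure
;; Hypothetical workflow: :process fans out to both :out and :error.
{:workflow [[:in :process]
            [:process :out]
            [:process :error]]
 :flow-conditions
 [;; Route thrown exceptions to :error only.
  {:flow/from :process
   :flow/to [:error]
   :flow/short-circuit? true
   :flow/thrown-exception? true
   ;; :my.app/format-exception is a hypothetical fn that turns the
   ;; caught exception into a plain segment for the :error task.
   :flow/post-transform :my.app/format-exception}
  ;; The extra dispatch: ordinary (non-exception) segments go to :out.
  {:flow/from :process
   :flow/to [:out]
   :flow/predicate :my.app/not-exception?}]}
```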
@lucasbradstreet guess that makes sense since it kind of resembles the flow of try/catch
Yeah, unfortunately it becomes pretty painful to make it work how most people want, as you need to add other flow conditions
Another one that comes up is when using trigger/emit. Most people only want to emit the segments emitted by the trigger, not the segments that flow through the window
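As a rough illustration of the :trigger/emit case being described (window id, threshold, and function name are all made up): the segments returned by the emit function flow downstream alongside the segments passing through the window, which is why the same filtering question comes up.

```clojure
;; Hypothetical trigger entry; :my.app/emit-summary would return the
;; segment(s) to send downstream when the trigger fires.
{:trigger/window-id :collect-window
 :trigger/id :summary-trigger
 :trigger/on :onyx.triggers/segment
 :trigger/threshold [100 :elements]
 :trigger/emit :my.app/emit-summary}
```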
@lucasbradstreet So is it okay to just leave the error flow edges out of the workflow? Seems to work as desired.
@dave.dixon you’d have to show me what you mean. Do you mean detached DAGs? Or do you mean restricting it by flow conditions?
We're going to need to tweak the way flow conditions work. We've been disgruntled with it for a while too
I mean just leaving the edges out of the :workflow. If I have a flow condition for thrown exceptions going from :do-stuff to :error, putting [:do-stuff :error] in the :workflow causes everything to be sent, but leaving that edge out and only having the flow condition "works", though my guess is that this is really a bug and shouldn't be relied upon.
@michaeldrogalis Yeah, I've been putting together a DSL to allow us to compactly define the graphs. Flow conditions are by far the most "interesting" case.
I was too. I've only tested with the local runtime, not sure if that has anything to do with it. Anyway, I'll add some explicit filtering and not rely on this behavior.
Thanks for the report. I’ll let you know when we have something better for you.
Onyx 0.11.1 is out with native, job level, watermark support. This allows you to only trigger windows once you’re sure all input sources are past a certain point in time. https://github.com/onyx-platform/onyx/blob/0.11.x/changes.md#0111
👏 All @lucasbradstreet on this one. This feature is really important for windowing.
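A hedged sketch of how the new watermark support is consumed (window id and sync function are hypothetical): a watermark trigger fires a window only once every input source's watermark has advanced past the window's upper bound.

```clojure
;; Hypothetical trigger entry using the watermark trigger.
{:trigger/window-id :events-window
 :trigger/id :watermark-trigger
 :trigger/on :onyx.triggers/watermark
 ;; :my.app/write-window! is a hypothetical sync fn invoked with
 ;; the fired window's contents.
 :trigger/sync :my.app/write-window!}
```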
Best solutions for error logging? We'd like to A: trigger an error with a monitoring lib (ex: datadog), B: write a log message, C: write to an error kafka topic. We see three paths: 1. put that logic into each task, 2. configure each task to allow an error segment to flow through it and handle it in our output task, 3. programmatically connect an error task to all our other tasks and add flow conditions to all of them allowing them to output an error segment. What are others doing?
Oh, I get it. Are you saying you want all of A, B, and C, and want to accomplish that with paths, 1, 2, or 3?
We pass the Onyx job through a series of post-compilers, which all take and produce an Onyx job. One of those is to add exception handlers.
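The post-compiler idea can be sketched like this (all names hypothetical): each stage is a plain function from an Onyx job map to an Onyx job map, so stages compose with ordinary threading.

```clojure
(defn add-exception-handlers
  "Hypothetical stage: wires an :error task into the workflow and adds
  :flow/thrown-exception? conditions for the other tasks."
  [job]
  ;; ... update (:workflow job), (:catalog job), (:flow-conditions job) ...
  job)

(defn add-metrics
  "Hypothetical stage: adds lifecycle entries for monitoring."
  [job]
  job)

(defn compile-job
  "Runs a job through the post-compiler pipeline; each stage takes
  and returns a full Onyx job."
  [job]
  (-> job
      add-exception-handlers
      add-metrics))
```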
@jholmberg I recognize you from your GitHub icon -- you were around in the very early days, right?
I'm an old school Ghostbusters fan (rented it on VHS when it came out) so been around a little while. Now that efforts on our side are picking up steam with Onyx, been a lot of fun getting more involved.
So once we deploy this, is there anything we should look out for? Meaning issues we may see?
Probably checkpoint latency would be the number one thing. @lucasbradstreet thoughts?
Trying to get a feel for things. Is there a normal size for these checkpoints? Or are they variable?