
Out of curiosity, it seems like onyx-kafka hardcodes the serializer to use the byte-array-serializer instead of your passed-in serializer. It will still serialize using your passed-in serialize fn, but that value then gets serialized by the byte-array serializer in Kafka. Is there a reason for this?


@kenny it was mostly a convenience so you can supply your own byte serializers/deserializers as Clojure fns without using the preferred Kafka-style serializers. I’m open to a PR that loosens this up if you want to use the standard Kafka mechanisms.
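For anyone following along, here is a minimal sketch of that two-step arrangement. The fn names are illustrative, not onyx-kafka's actual API; a real job would typically use nippy's freeze/thaw, but EDN is used here to keep the sketch self-contained:

```clojure
(ns example.serde
  (:require [clojure.edn :as edn]))

;; Your serializer fn turns a segment into a byte array; onyx-kafka then
;; hands those bytes to Kafka's ByteArraySerializer, which passes them
;; through untouched.
(defn serialize-segment ^bytes [segment]
  (.getBytes (pr-str segment) "UTF-8"))

;; The matching deserializer fn: bytes back into a segment.
(defn deserialize-segment [^bytes bs]
  (edn/read-string (String. bs "UTF-8")))
```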


I see. I don't have a particular need for this. Just trying to understand some things. Do most people serialize with Nippy?


Do users simply not have a need to replace the "lower level" Kafka serializer?


i.e. this two-step serialization process has been good enough.


It hasn’t really come up yet, since it’s about as easy as the Kafka approach; it’s not really deserializing anything, it just gets a bag of bytes. The main downside is that you don’t get to use the Kafka schema registry, so you can’t evolve your serialization formats (e.g. with Avro), but that really only amounts to a performance/size penalty vs something like nippy.


Overall it does need more work for advanced serializer use.


It would seem there hasn't been a particular pull for this feature, so I'm guessing what's already implemented is good enough?


Yeah. I think it depends on how much you generally depend on the extra kafka infrastructure (e.g. schema registry). I can see it being important, but it hasn’t come up much yet.


i've started seeing errors where segments appear to be getting sent to the wrong functions (by flow conditions)... i'm on onyx 0.9.15, and this has suddenly started happening on a production cluster which has been working fine for the last few weeks


has this been encountered before ?


I’ve never really used flow conditions in anger; the ones I’ve used in prod worked okay. That was in 0.10.x though.


i've got a fanout of around 15,000 (i.e. one message in 15,000 out) and those 15k which consist of a few different types of message are being directed to different fns with flow conditions according to type... but some of them seem to end up with the wrong fn


I’ve never fanned out to anything at that scale, so I’m probably not the best person to answer.


@mccraigmccraig Fan-out magnitude is unlikely to be related. Nothing at all changed in the last few weeks?


nothing - it's been running the same docker images on the same mesos cluster since we first deployed the release


Can you come up with a reproducer?


And can you test that your flow predicates behave correctly as plain functions?
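To expand on that: an Onyx flow-condition predicate is just a fn taking event, old segment, new segment, and all new segments, so it can be unit-tested without a running cluster. A hypothetical predicate for the push-notification case mentioned later might look like:

```clojure
;; Hypothetical flow-condition predicate. Onyx calls it with
;; [event old-segment segment all-new-segments] and routes the segment to
;; the condition's :flow/to tasks when it returns true.
(defn push-notification? [event old-segment segment all-new-segments]
  (= :push-notification (:type segment)))

;; Exercised as a plain function, no cluster required:
(push-notification? {} nil {:type :push-notification} [])  ;; => true
(push-notification? {} nil {:type :email} [])              ;; => false
```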


oops wrong gist


i'm firefighting now - going to remove the flow-conditions and see what happens


removing all the flow-conditions except for the error handling conditions seems to be working for the moment @michaeldrogalis - although it's not a long-term solution - i need the flow conditions to treat different types of effect differently, e.g. to put push notifications onto an output kafka topic rather than dispatching immediately
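For reference, routing effects by type like that might be declared roughly as below. The task and predicate names are made up; the `:flow/*` keys follow Onyx's information model:

```clojure
;; Sketch of flow conditions that split effects coming out of a
;; hypothetical :process-effects task by type.
(def flow-conditions
  [;; push notifications go to a Kafka output task instead of
   ;; being dispatched immediately
   {:flow/from :process-effects
    :flow/to [:push-notifications-out]
    :flow/predicate :my.app.flow/push-notification?}
   ;; everything else dispatches immediately
   {:flow/from :process-effects
    :flow/to [:dispatch]
    :flow/predicate :my.app.flow/dispatchable?}])
```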


assuming my problem were to be an onyx issue (which it may not be), how likely do you think it is that an upgrade to 0.10 will solve it? i.e. how much around flow-conditions and peer messaging changed between 0.9 and 0.10?


A lot has changed around messaging, only a little bit has changed with flow conditions in 0.11+ (we handle exceptions differently)


It's suspicious that everything is blowing up when nothing apparently changed in your code.