#onyx
2016-12-23
hunter00:12:40

I have two topologies that I'm submitting to an onyx cluster. two jobs are created. only one job ever "activates" and shows evidence of existence in the server logs

hunter00:12:34

both topologies work, and if submitted separately run as expected. both are long-running "streaming" topologies that must run simultaneously on the same cluster

hunter00:12:10

the order submitted matters and the first submitted topology always wins

lucasbradstreet00:12:15

@hunter are you using the greedy job scheduler?

hunter00:12:56

i was, i am not anymore

hunter00:12:02

but i have not seen any change in behavior

hunter00:12:30

i now have the peer config/job scheduler set to greedy in both my server and my submitting peer config

hunter00:12:51

i'm submitting topologies via the api through zookeeper

lucasbradstreet00:12:02

greedy will allocate all peers to the first job until it completes

lucasbradstreet00:12:12

you will need a new :onyx/id

hunter00:12:42

THAT's the problem

hunter00:12:00

i didn't change the tenancy id

lucasbradstreet00:12:37

Ah, we should check that the stored parameters are the same when they already exist

lucasbradstreet00:12:50

and throw an error if not

hunter01:12:26

@lucasbradstreet it looks like that is working, thank you very much. i have learned this lesson before about tenancy-ids, but it always slips my mind. hopefully this has left an indelible impression

lucasbradstreet01:12:49

I just committed a change to tell you when your peers are joining with an incompatible scheduler to the one that's set
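[Editor's note: for readers hitting the same issue, here is a minimal, hypothetical sketch of the peer-config keys involved. The addresses and tenancy-id value are placeholders; the key names follow the Onyx 0.9.x cheat sheet. The two fixes discussed above are: use a scheduler other than greedy so both streaming jobs get peers, and make sure submitter and peers agree on the tenancy id.]

```clojure
;; Sketch only -- values are placeholders, not from the conversation.
(def peer-config
  {:zookeeper/address "127.0.0.1:2188"
   ;; Must match on every peer and every submitter; a stale value here
   ;; means your job lands in a different (invisible) tenancy.
   ;; Older releases call this key :onyx/id.
   :onyx/tenancy-id "my-cluster-2016-12-23"
   ;; :onyx.job-scheduler/greedy gives ALL peers to the first job until it
   ;; completes, so a second long-running job never starts. Balanced splits
   ;; peers across running jobs.
   :onyx.peer/job-scheduler :onyx.job-scheduler/balanced
   :onyx.messaging/impl :aeron
   :onyx.messaging/bind-addr "localhost"})
```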

jasonbell09:12:21

Has anyone come across this exception in the amazon-s3 plugin

java.lang.ClassCastException: java.lang.String cannot be cast to [B
  clojure.lang.ExceptionInfo: Caught exception inside task lifecycle. Rebooting the task. -> Exception type: java.lang.ClassCastException. Exception message: java.lang.String cannot be cast to [B
       job-id: #uuid "f28fc53e-2a79-40c2-a312-f347955569cd"
     metadata: {:job-id #uuid "f28fc53e-2a79-40c2-a312-f347955569cd", :job-hash "7374369c995db0a4fb23e21d14817d5f697a92f5621adecea68c275575bf82"}
      peer-id: #uuid "084f9135-9d52-4053-bd85-a2108f688d93"
    task-name: :out
I’m using the default :s3/serializer :clojure.core/pr-str and it always seems to get upset about converting to what I’m assuming is a byte array.

jasonbell13:12:41

@michaeldrogalis

That message is typically emitted when a peer is beginning a task but can’t make initial connections, so it’s retrying.
Is it possible to separate the error messages out to indicate that this is happening?

lucasbradstreet18:12:12

@jasonbell: that is the wrong serializer for us to be defaulting to, as it should be converting to bytes

jasonbell18:12:55

I see, I just ripped it out of the test example. Thanks @lucasbradstreet
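[Editor's note: the `ClassCastException` above happens because `pr-str` returns a `String`, while the S3 plugin expects the serializer to return a byte array (`[B` is the JVM's notation for `byte[]`). A hedged sketch of a byte-returning serializer, assuming the function name and catalog wiring shown here rather than taking them from the thread:]

```clojure
;; Hypothetical serializer for the amazon-s3 plugin: same pr-str encoding
;; as before, but encoded to the byte[] the plugin requires.
(defn serialize-segment ^bytes [segment]
  (.getBytes (pr-str segment) "UTF-8"))
```

The catalog entry would then point the serializer key from the message above at a namespaced var such as `::serialize-segment` instead of `:clojure.core/pr-str`.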

stephenmhopper19:12:29

I have an Onyx job that writes to Kafka. I'm looking to write full end to end tests for it. Do we have a project somewhere that demonstrates writing tests for jobs which output to kafka? I've been looking at the tests in the onyx-kafka plugin as well as the test-utilities described in the project's readme, but I'm noticing some discrepancies

stephenmhopper19:12:16

It looks like there's some nice stuff here: https://github.com/onyx-platform/onyx-kafka/blob/0.9.x/test/onyx/plugin/test_utils.clj but that's a test namespace so I don't believe it's available to me when I pull in onyx-kafka

michaeldrogalis22:12:13

@jasonbell Hi. I’m on vacation, I’ll be back to help after the 27th.