This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-12-23
Channels
- # ai (1)
- # beginners (84)
- # boot (111)
- # cider (2)
- # cljsrn (9)
- # clojure (245)
- # clojure-italy (2)
- # clojure-mke (1)
- # clojure-russia (6)
- # clojure-spec (92)
- # clojure-uk (32)
- # clojurescript (55)
- # core-async (1)
- # cursive (8)
- # datomic (19)
- # events (1)
- # hoplon (379)
- # lambdaisland (4)
- # lein-figwheel (8)
- # off-topic (115)
- # om (18)
- # om-next (5)
- # onyx (25)
- # re-frame (8)
- # reagent (5)
- # ring-swagger (1)
- # rum (19)
- # schema (3)
- # untangled (24)
I have two topologies that I'm submitting to an Onyx cluster. Two jobs are created, but only one job ever "activates" and shows evidence of existence in the server logs
Both topologies work and, if submitted separately, run as expected. Both are long-running "streaming" topologies that must run simultaneously on the same cluster
I now have the peer-config job scheduler set to greedy in both my server and my submitting peer config
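For readers following along, a minimal sketch of what that peer-config setting looks like, using the standard Onyx configuration keys; the tenancy id and ZooKeeper address here are illustrative placeholders, not values from the conversation:

```clojure
;; Sketch of the peer-config entries relevant to this discussion.
;; "dev-tenancy" and the ZooKeeper address are placeholder values.
{:onyx/tenancy-id "dev-tenancy"            ;; must match between the cluster and the submitter
 :zookeeper/address "127.0.0.1:2188"
 ;; other schedulers include :onyx.job-scheduler/balanced
 ;; and :onyx.job-scheduler/percentage
 :onyx.peer/job-scheduler :onyx.job-scheduler/greedy}
```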
Ah, we should check that the stored parameters are the same when they already exist
@lucasbradstreet it looks like that is working, thank you very much. I have learned this lesson before about tenancy-ids, but it always slips my mind. Hopefully this has left an indelible impression
I just committed a change to tell you when your peers are joining with a scheduler that's incompatible with the one that's set
Has anyone come across this exception in the amazon-s3 plugin?
java.lang.ClassCastException: java.lang.String cannot be cast to [B
clojure.lang.ExceptionInfo: Caught exception inside task lifecycle. Rebooting the task. -> Exception type: java.lang.ClassCastException. Exception message: java.lang.String cannot be cast to [B
job-id: #uuid "f28fc53e-2a79-40c2-a312-f347955569cd"
metadata: {:job-id #uuid "f28fc53e-2a79-40c2-a312-f347955569cd", :job-hash "7374369c995db0a4fb23e21d14817d5f697a92f5621adecea68c275575bf82"}
peer-id: #uuid "084f9135-9d52-4053-bd85-a2108f688d93"
task-name: :out
I’m using the default :s3/serializer :clojure.core/pr-str
and it always seems to get upset about converting to what I’m assuming is a byte array. @michaeldrogalis
That message is typically emitted when a peer is beginning a task but can’t make initial connections, so it’s retrying.
Is it possible to separate the error messages out to indicate that this is happening?
@jasonbell: that is the wrong serializer for us to be defaulting to, as it should be converting to bytes
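For anyone hitting the same ClassCastException, a hedged sketch of a serializer that yields a byte array rather than a String. The `:s3/serializer-fn` key name and the fn-of-segment-to-bytes contract are my assumptions about the plugin's options; check the onyx-amazon-s3 README before relying on them:

```clojure
(ns my.app.serializers)                       ;; hypothetical namespace

;; Wraps pr-str so the result is a byte array ([B), which is what the
;; failing cast above expects, instead of a java.lang.String.
(defn pr-str->bytes ^bytes [segment]
  (.getBytes ^String (pr-str segment) "UTF-8"))

;; In the catalog entry (key name is an assumption):
;; :s3/serializer-fn :my.app.serializers/pr-str->bytes
```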
I have an Onyx job that writes to Kafka. I'm looking to write full end-to-end tests for it. Do we have a project somewhere that demonstrates writing tests for jobs that output to Kafka? I've been looking at the tests in the onyx-kafka
plugin as well as the test utilities described in the project's readme, but I'm noticing some discrepancies
It looks like there's some nice stuff here: https://github.com/onyx-platform/onyx-kafka/blob/0.9.x/test/onyx/plugin/test_utils.clj
but that's a test namespace, so I don't believe it's available to me when I pull in onyx-kafka
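In the absence of a packaged test utility, one hedged approach is to read the output topic directly from the test using the plain Java Kafka consumer via interop, and assert on the EDN-decoded segments. This is a sketch, not an onyx-kafka API: it assumes a reachable broker, and the topic name, group id, and broker address are placeholders:

```clojure
(ns my.app.job-test                            ;; hypothetical test namespace
  (:require [clojure.edn :as edn])
  (:import [org.apache.kafka.clients.consumer KafkaConsumer ConsumerRecord]
           [java.util Properties]))

;; Build a consumer pointed at a test broker (address is a placeholder).
(defn test-consumer [topic]
  (let [props (doto (Properties.)
                (.put "bootstrap.servers" "127.0.0.1:9092")
                (.put "group.id" "e2e-test")
                (.put "auto.offset.reset" "earliest")
                (.put "key.deserializer"
                      "org.apache.kafka.common.serialization.ByteArrayDeserializer")
                (.put "value.deserializer"
                      "org.apache.kafka.common.serialization.ByteArrayDeserializer"))
        consumer (KafkaConsumer. props)]
    (.subscribe consumer ["job-output-topic-placeholder"])
    consumer))

;; After submitting the job, poll until n segments arrive (or attempts
;; run out), decoding each record value from EDN bytes.
(defn read-segments [^KafkaConsumer consumer n]
  (loop [acc [] attempts 20]
    (if (or (>= (count acc) n) (zero? attempts))
      acc
      (let [records (.poll consumer 500)]
        (recur (into acc
                     (map #(edn/read-string
                             (String. ^bytes (.value ^ConsumerRecord %) "UTF-8"))
                          records))
               (dec attempts))))))
```

The assertion step is then an ordinary `clojure.test` comparison of `(read-segments consumer expected-count)` against the segments you fed in.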