This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-05-18
That’s great, thanks
I’m trying to read from Kafka on a local cluster. I wasn’t expecting it to work right away, but I don’t see any error messages either
I see this, for instance: `Peer chose not to start the task yet. Backing off and retrying`. Does this “decision” indicate an error?
It also seems the peers aren’t available anymore for other jobs either. I’ve killed the Kafka job and I’m trying to run one that normally works, but the peers seem to be preoccupied (`Not enough virtual peers have warmed up to start the task yet, backing off and trying again…`)
@jeroenvandijk: Usually that’s indicative that all the peers haven’t finished starting up. Perhaps it’s trying to connect to ZK/Kafka and the peer reading from it is waiting to time out? That might explain why killing the job isn’t working, because it’s still timing out?
The way it works is that all the peers perform their initialization logic and then emit a signal that the peer is ready. When enough peers have started so that the job is covered, it will start. So the likely cause is that a critical task has a peer that is hanging on a start-task lifecycle.
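As a sketch of that mechanism (task and function names below are hypothetical, not taken from this conversation): an Onyx lifecycle whose `:lifecycle/start-task?` hook returns false causes the peer to hold off starting the task and retry later, which is the kind of “decision” being logged here.

```clojure
(ns my.ns)

;; Hypothetical readiness check, e.g. "can we reach Kafka yet?"
(defn kafka-reachable? []
  true)

;; Calls map referenced by the lifecycle below. If :lifecycle/start-task?
;; returns false, the peer does not start the task yet and retries later.
(def calls
  {:lifecycle/start-task? (fn [_event _lifecycle]
                            (kafka-reachable?))})

(def lifecycles
  [{:lifecycle/task :read-messages        ;; hypothetical task name
    :lifecycle/calls :my.ns/calls}])
```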
New 64 CPU, 2TB RAM instances on EC2 https://aws.amazon.com/blogs/aws/x1-instances-for-ec2-ready-for-your-memory-intensive-workloads/ whee
@a.espolov: here is an example using docker-compose with a kafka-writer. You can scale the number of peers using docker-compose scale. git@github.com:zamaterian/Onyx-kafka-writer.git
@zamaterian: thx :) How do I use https://github.com/zamaterian/Onyx-kafka-writer/blob/master/script/run_peers.sh to run more peers?
@lucasbradstreet: are you aware of anybody working on an onyx-kinesis plugin?
@lucasbradstreet: apparently I fell into the number-of-peers trap again… I didn’t realise Kafka needed a minimum number of peers equal to the number of partitions. I’ve chosen a fixed partition and now it seems to be reading
@jeroenvandijk: it should have logged an error. Let us know if you didn't see one in the logs
@bcambel: I'm not aware of one
@lucasbradstreet: This one you mean? `Not enough virtual peers have warmed up to start the task yet, backing off and trying again…`
Ah, now I found it in the logs: `clojure.lang.ExceptionInfo: :onyx/min-peers must equal :onyx/max-peers and the number of kafka partitions`
You have to be quick though
hmm never mind, don’t know what happened before. Error is all over the place now
Yeah, that one. Easy to miss. If you load up the dashboard you should've been able to see that the job was killed and that this was the exception that caused it to be killed.
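For illustration, a sketch of an onyx-kafka input catalog entry that satisfies that constraint, assuming a topic with 4 partitions; the topic, group id, and deserializer names are placeholders, and exact `:kafka/*` keys can vary between onyx-kafka versions:

```clojure
(def read-messages-task
  {:onyx/name :read-messages
   :onyx/plugin :onyx.plugin.kafka/read-messages
   :onyx/type :input
   :onyx/medium :kafka
   :kafka/topic "my-topic"                         ;; placeholder topic
   :kafka/group-id "onyx-consumer"                 ;; placeholder group id
   :kafka/zookeeper "127.0.0.1:2181"
   :kafka/deserializer-fn :my.ns/deserialize-message
   :onyx/min-peers 4                               ;; must equal the partition count
   :onyx/max-peers 4                               ;; and must equal :onyx/min-peers
   :onyx/batch-size 100})
```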
I think I’m reaching the max number of ZooKeeper connections now. Wasn’t there a way to share the ZooKeeper connection? Or is that not available for onyx-kafka?
ah I guess not yet https://github.com/onyx-platform/onyx/issues/460
@jeroenvandijk: we're building issue 460 out now. You can increase the connection limit in your zookeeper config for the time being
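For reference, the server-side setting involved here is ZooKeeper’s per-client connection limit in zoo.cfg (the default is 60); the value shown is only an example, the right number depends on the deployment:

```
# zoo.cfg
# maximum number of concurrent connections a single client (per IP) may open
maxClientCnxns=200
```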
@a.espolov: see the readme, updated it with a docker-compose scale entry
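That scale entry presumably boils down to something like the following; the `peer` service name is an assumption, the actual name is whatever the repo’s docker-compose.yml defines:

```
# run three peer containers (service name assumed to be "peer")
docker-compose scale peer=3
```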