#onyx
2016-05-18
jeroenvandijk 09:05:29

That’s great, thanks

jeroenvandijk 10:05:18

I’m trying to read from Kafka on a local cluster. I’m not expecting it to work right away, but I don’t see any error messages either

jeroenvandijk 10:05:43

I see this, for instance: `Peer chose not to start the task yet. Backing off and retrying.` Does this “decision” indicate an error?

jeroenvandijk 10:05:01

It also seems the peers aren’t available for other jobs anymore. I’ve killed the Kafka job and I’m trying to run one that normally works, but the peers seem to be preoccupied (`Not enough virtual peers have warmed up to start the task yet, backing off and trying again…`)

lucasbradstreet 10:05:12

@jeroenvandijk: Usually that’s indicative that all the peers haven’t finished starting up. Perhaps it’s trying to connect to ZK/Kafka and the peer reading from it is waiting to time out? That might explain why killing the job isn’t working, because it’s still timing out?

lucasbradstreet 10:05:55

The way it works is that all the peers perform their initialization logic and then emit a signal that the peer is ready. When enough peers have started that the job is covered, it will start. So the likely cause is that a critical task has a peer that is hanging in a start-task lifecycle.

zamaterian 11:05:46

@a.espolov: here is an example using docker-compose with a kafka-writer. You can scale the number of peers using docker-compose scale. git@github.com:zamaterian/Onyx-kafka-writer.git
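For anyone following along, the scaling step looks roughly like this; the service name `peer` and the images are assumptions, not taken from the linked repo:

```yaml
# Hypothetical docker-compose.yml sketch for an Onyx peer cluster.
# Bring it up with `docker-compose up -d`, then run
# `docker-compose scale peer=3` to run three peer containers.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper   # assumed ZooKeeper image
    ports:
      - "2181:2181"
  peer:
    build: .                        # assumed Onyx peer image
    environment:
      - ZOOKEEPER_ADDR=zookeeper:2181
    depends_on:
      - zookeeper
```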

bcambel 11:05:43

@lucasbradstreet: are you aware of anybody working on an onyx-kinesis plugin?

jeroenvandijk 11:05:00

@lucasbradstreet: apparently I fell into the number-of-peers trap again… I didn’t realise Kafka needed a minimum number of peers equal to the number of partitions. I’ve chosen a fixed partition and now it seems to be reading
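The constraint that tripped things up shows up in the onyx-kafka catalog entry; here is a sketch (topic name, deserializer fn, and batch size are illustrative, not from this conversation):

```clojure
;; Sketch of an onyx-kafka :input catalog entry. For a topic with 4
;; partitions, :onyx/min-peers and :onyx/max-peers must both equal 4,
;; or the job is killed at submission. Alternatively, pin the task to
;; one partition with :kafka/partition "0" and run it with one peer.
{:onyx/name :read-messages
 :onyx/plugin :onyx.plugin.kafka/read-messages
 :onyx/type :input
 :onyx/medium :kafka
 :kafka/topic "my-topic"                     ; illustrative topic
 :kafka/zookeeper "127.0.0.1:2181"
 :kafka/deserializer-fn :my.app/deserialize  ; hypothetical fn
 :onyx/min-peers 4
 :onyx/max-peers 4
 :onyx/batch-size 100}
```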

lucasbradstreet 11:05:57

@jeroenvandijk: it should have logged an error. Let us know if you didn't see one in the logs

lucasbradstreet 11:05:13

@bcambel: I'm not aware of one

jeroenvandijk 11:05:39

@lucasbradstreet: This one, you mean? `Not enough virtual peers have warmed up to start the task yet, backing off and trying again…`

jeroenvandijk 12:05:54

Ah, now I found it in the logs: `clojure.lang.ExceptionInfo: :onyx/min-peers must equal :onyx/max-peers and the number of kafka partitions`

jeroenvandijk 12:05:02

You have to be quick though

jeroenvandijk 12:05:06

hmm, never mind, I don’t know what happened before. The error is all over the place now

lucasbradstreet 12:05:50

Yeah, that one. Easy to miss. If you load up the dashboard you should've been able to see that the job was killed and that this was the exception that caused it to be killed.

jeroenvandijk 12:05:00

I think i’m reaching the max number of zookeeper connections now. Wasn’t there a way to share the zookeeper connection? Or not for onyx-kafka?

lucasbradstreet 12:05:54

@jeroenvandijk: we're building issue 460 out now. You can increase the connection limit in your zookeeper config for the time being
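Raising the limit for the time being is a one-line change in `zoo.cfg`; note that `maxClientCnxns` is counted per client IP, and the value below is an arbitrary example:

```properties
# zoo.cfg -- maxClientCnxns caps concurrent connections from a single
# client IP (default 60 in ZooKeeper 3.4.x). 0 removes the limit, but
# a bounded value is safer. Restart ZooKeeper after changing it.
maxClientCnxns=200
```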

zamaterian 13:05:35

@a.espolov: see the readme; I updated it with a docker-compose scale entry