This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2015-11-06
Channels
- # admin-announcements (59)
- # announcements (1)
- # beginners (67)
- # boot (140)
- # cljsrn (8)
- # clojure (70)
- # clojure-berlin (18)
- # clojure-dev (7)
- # clojure-russia (53)
- # clojurescript (124)
- # clojurescript-ios (3)
- # clojurewerkz (2)
- # clojurex (10)
- # code-reviews (42)
- # cursive (9)
- # datomic (2)
- # editors-rus (2)
- # emacs (5)
- # events (1)
- # hoplon (35)
- # jobs (8)
- # ldnclj (7)
- # lein-figwheel (34)
- # luminus (1)
- # om (410)
- # onyx (22)
- # overtone (19)
- # portland-or (6)
- # re-frame (1)
- # yada (4)
Hi @michaeldrogalis lucasbradstreet ... I am trying to switch over from core.async to Kafka for reading messages. I've gone through the docs here: https://github.com/onyx-platform/onyx-kafka But Onyx does not seem to be reading from the topic I established.
I have checked, and the messages have been put on the Kafka topic referred to by my :read-messages task in the catalog
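For context, a :read-messages input task for onyx-kafka is declared as a catalog entry. The sketch below follows the 2015-era onyx-kafka README; the topic, group id, ZooKeeper address, and deserializer function are placeholders, and the exact :kafka/* key names may differ between plugin versions:

```clojure
;; Hypothetical onyx-kafka input catalog entry (a sketch, not a verified
;; config). All string values and the deserializer fn are placeholders.
{:onyx/name :read-messages
 :onyx/plugin :onyx.plugin.kafka/read-messages
 :onyx/type :input
 :onyx/medium :kafka
 :kafka/topic "my-topic"              ;; placeholder topic name
 :kafka/group-id "my-consumer-group"  ;; placeholder consumer group
 :kafka/zookeeper "127.0.0.1:2181"    ;; placeholder ZooKeeper address
 :kafka/offset-reset :smallest        ;; start from the earliest offset
 :kafka/deserializer-fn :my.ns/deserialize-message ;; placeholder fn
 :onyx/batch-size 100
 :onyx/doc "Reads segments from a Kafka topic"}
```

If the topic name or ZooKeeper address here doesn't match where the messages were actually produced, the task will sit idle without erroring, which matches the symptom described above.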
@spangler: I'd need to see a bit of code and logs to help.
@yusup: do you mean that you want one task running on a peer on each physical node? There's no way to do this with our current scheduler I'm afraid
Ah. We've been discussing improving the scheduler to allow that, but it's actually quite complex to get right, and we want to do it right.
The best thing I can suggest for now is to use bigger instances so that things are normally spread out more evenly.
Currently I can submit a job to a cluster where each physical node contains only a single virtual peer, which basically achieves what I want.
You could use a second set of peers running on another onyx/id, and submit the other job to that cluster. It's not ideal though.
Especially since you only have one peer per node, because your job will stop if any of them dies
Yeah. Two "clusters" running on the same machine since onyx/id will logically separate them
It's not ideal and hopefully we can have a better solution for you in the future
Ha ok. That works too
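The "two clusters on one machine" idea above can be sketched as two peer configs that differ only in :onyx/id, so peers started with each config are logically separated (a sketch assuming the 2015-era peer-config keys; the id strings and ZooKeeper address are placeholders, and other required peer-config keys are elided):

```clojure
;; Hypothetical peer configs for two logically separate Onyx "clusters"
;; on the same machines. Only :onyx/id differs between them.
(def peer-config-a
  {:onyx/id "cluster-a"                 ;; placeholder cluster id
   :zookeeper/address "127.0.0.1:2181"  ;; placeholder ZooKeeper address
   :onyx.peer/job-scheduler :onyx.job-scheduler/greedy})

(def peer-config-b
  ;; Same config, different :onyx/id => a separate logical cluster.
  (assoc peer-config-a :onyx/id "cluster-b"))

;; A job submitted with peer-config-a is only picked up by peers started
;; with that same :onyx/id, and likewise for peer-config-b.
```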