#onyx
2017-04-12
yonatanel 10:04:17

Have you guys ever surveyed the kafka-fast client? I know these days you just use the Java API directly, but if you have any experience reports with kafka-fast it would be nice. https://github.com/gerritjvv/kafka-fast

jasonbell 14:04:12

I have now 🙂

souenzzo 15:04:31

Hello. I'm trying to understand Onyx clustering. Where do peers actually run? On my application's JVM? On the ZooKeeper JVM?

souenzzo 15:04:00

If I want to send all peers to a "slave machine" and keep my "main" app without peers, how do I do that?

michaeldrogalis 15:04:19

@souenzzo I walk through the coordination architecture in depth here: https://www.youtube.com/watch?v=KVByn_kp2fQ There’s also the architecture section of the user guide: http://www.onyxplatform.org/docs/user-guide/latest/#low-level-design

michaeldrogalis 15:04:37

Peers are processes that run inside their own JVM.

michaeldrogalis 15:04:03

You don’t “send” peers to anywhere. You send jobs to ZooKeeper, and peers fetch jobs from ZK in turn.

souenzzo 15:04:51

So I can create a "peer-app" with just the "onyx" dep. In it I will call (peer/start-peer 100 peer-config env-config), and in my "main-app", with tons of deps, I will call (onyx.api/submit-job peer-config job)?

souenzzo 15:04:43

My peer-app can be static (of course, same Onyx/JVM/Clojure versions)

michaeldrogalis 15:04:56

@souenzzo You create a Leiningen application with Onyx as a dependency, as well as anything else your app needs. Write a main that starts a number of virtual peers. That will start N pools of processing threads per JVM. Stand up as many instances of that across different machines as you’d like. They’ll all know how to talk to each other through ZooKeeper.
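A minimal sketch of such a peer-launching main, assuming an Onyx ~0.10-era dependency; the namespace name and the specific config values (ZooKeeper address, tenancy id, port) are illustrative, and older versions used :onyx/id where newer ones use :onyx/tenancy-id:

```clojure
(ns my-peer-app.core            ; hypothetical ns; needs org.onyxplatform/onyx on the classpath
  (:require [onyx.api])
  (:gen-class))

(def peer-config
  {:zookeeper/address "127.0.0.1:2181"              ; point at your real ZK ensemble
   :onyx/tenancy-id "my-tenancy"                    ; all peers + submitters share this
   :onyx.peer/job-scheduler :onyx.job-scheduler/balanced
   :onyx.messaging/impl :aeron
   :onyx.messaging/bind-addr "localhost"
   :onyx.messaging/peer-port 40200})

(defn -main [& args]
  ;; One peer-group per JVM, then N virtual peers (processing threads) inside it.
  (let [peer-group (onyx.api/start-peer-group peer-config)
        v-peers    (onyx.api/start-peers 100 peer-group)]
    ;; Block forever; the peers sit idle until ZooKeeper hands them work.
    @(promise)))
```

Running this same jar on several machines, all pointed at the same ZooKeeper address and tenancy id, is what forms the cluster.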

michaeldrogalis 15:04:02

Use onyx.api/submit-job to submit work to the cluster.
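From the "main" app, submission needs only the same peer-config (ZooKeeper address + tenancy id) to reach the cluster. A sketch, with the catalog and lifecycles elided since they depend on your tasks:

```clojure
(require '[onyx.api])

(def job
  {:workflow [[:in :process] [:process :out]]   ; task graph: in -> process -> out
   :catalog [...]                               ; one entry per task (plugins, fns, batch sizes)
   :lifecycles [...]
   :task-scheduler :onyx.task-scheduler/balanced})

;; Returns a map; keep :job-id around for kill-job / await-job-completion later.
(def submission (onyx.api/submit-job peer-config job))
(:job-id submission)
```

Note the submitting process never runs the job itself; the peers pick it up from ZooKeeper.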

michaeldrogalis 15:04:20

There is no jar-level transferring in Onyx.

souenzzo 16:04:27

Right, I will have main-app and main-peer. But now: I have a job that "never finishes" (Datomic). When I deploy, I want to "update" this job, so I will manually send a "kill" to this job before stopping the application (on deploy stop), right??

michaeldrogalis 16:04:39

@souenzzo Your application and Onyx are independent. They’re running in different processes, presumably on different machines. onyx.api/kill-job will bring a job down. Use onyx.api/submit-job to bring a new one back up.
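The deploy flow described above can be sketched as two calls from any process that has the peer-config; job-id is whatever the earlier submit-job returned, and new-job is your updated job map:

```clojure
(require '[onyx.api])

;; 1. Bring down the long-running job (e.g. the Datomic one) before the deploy.
(onyx.api/kill-job peer-config job-id)

;; 2. Resubmit the updated job; the running peers pick it up via ZooKeeper.
(def new-submission (onyx.api/submit-job peer-config new-job))
```

Since killing and submitting go through ZooKeeper, neither step requires restarting the peer JVMs.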

lmergen 21:04:13

@souenzzo to give you a bit of an idea, i’ve written a ‘streamer’ tool for myself, which handles these kinds of tasks

lmergen 21:04:22

(that’s just a ns)

lmergen 21:04:50

so the submitting / killing of a job, and the actual running of the job, are two completely independent things

michaeldrogalis 21:04:37

@lmergen Yep, that’s exactly how it’s meant to be used. We defer management and deployment to other tools to gain more flexibility.

lmergen 21:04:44

yes, simplicity 🙂

lmergen 21:04:17

had to get used to that a bit, i’m still coming from a Hadoop / Spark mindset