This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2015-10-21
Channels
- # admin-announcements (42)
- # alda (1)
- # beginners (11)
- # boot (24)
- # boulder-clojurians (2)
- # cider (10)
- # cljs-dev (23)
- # clojure (63)
- # clojure-czech (4)
- # clojure-japan (2)
- # clojure-russia (44)
- # clojure-sg (2)
- # clojure-switzerland (2)
- # clojurescript (135)
- # community-development (5)
- # css (4)
- # cursive (19)
- # datomic (34)
- # emacs (2)
- # events (5)
- # funcool (13)
- # hoplon (3)
- # ldnclj (43)
- # ldnproclodo (1)
- # lein-figwheel (7)
- # luminus (7)
- # off-topic (54)
- # om (115)
- # onyx (82)
- # overtone (3)
- # re-frame (6)
- # reagent (15)
- # yada (5)
Onyx 0.7.11 has been released with an AOT bug fix.
@michaeldrogalis Okay! Everything is up and running, and I am launching onyx tasks successfully
and then those nodes go through a variety of steps, each of which is independent and can be done in parallel
So the question is, do I want to make each of these separate jobs? Or can they be different steps in the same workflow?
What do you mean when you say nodes?
In the workflow?
then once all the nodes are processed, some operations to do on the graph as a whole
So, there is a process that progressively generates nodes, sending them 30 at a time to be further processed
So you're saying your segments are themselves DAGs?
I didn't follow it closely enough to be able to give advice there. I'd need to see something more concrete.
Try one workflow, and see if there's anything preventing you from doing that.
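For the single-workflow route, the independent per-node steps could be expressed as a fan-out/fan-in graph. A hedged sketch of an Onyx workflow (a vector of [from to] edges; every task name here is hypothetical, and the actual steps may differ):

```clojure
;; Hypothetical single-workflow layout: one input task fans out to
;; independent parallel steps, which fan back in for the whole-graph work.
[[:read-nodes    :step-a]
 [:read-nodes    :step-b]
 [:read-nodes    :step-c]
 [:step-a        :combine-graph]
 [:step-b        :combine-graph]
 [:step-c        :combine-graph]
 [:combine-graph :write-output]]
```

Onyx runs each task on its own virtual peers, so the three branches process segments concurrently without any extra coordination code.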
Is there a way to send those 30 nodes onto the next stage in the workflow without ending the stage that is sending them?
I know how to send a sequence of segments to a job, but not to a task from another task
Basically I want something like a core.async channel that I can put segments onto from one task and have them be the input to the next task in the workflow
You don't control the flow of segments between tasks; Onyx makes that invisible to you.
You can use an output task to dump segments onto external storage, and have an input task from another job that reads that external storage. @spangler
It's a little unconventional, but there's nothing stopping you from doing that.
Probably easier to start both jobs at the same time, no?
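The external-storage handoff between two jobs amounts to a pair of catalog entries. A hedged sketch using real Onyx catalog keys but placeholder plugin names (the actual plugin depends on the storage medium chosen — Kafka, a database, etc.):

```clojure
;; Illustrative only: the :onyx/plugin values below are placeholders,
;; not real plugin namespaces.

;; Job A ends with an output task that persists segments externally:
{:onyx/name       :persist-nodes
 :onyx/plugin     :my.plugins/durable-write  ; placeholder
 :onyx/type       :output
 :onyx/medium     :external-store
 :onyx/batch-size 30
 :onyx/doc        "Writes node segments to shared storage"}

;; Job B begins with an input task that reads the same storage:
{:onyx/name       :read-persisted-nodes
 :onyx/plugin     :my.plugins/durable-read   ; placeholder
 :onyx/type       :input
 :onyx/medium     :external-store
 :onyx/batch-size 30
 :onyx/doc        "Reads node segments from shared storage"}
```

With both jobs submitted at the same time, job B's input task simply consumes whatever job A's output task has written so far.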
You will not see :done at any stage in your tasks, though. It will only appear at the end when everything is finished.
So no need to handle it in your functions
Sure thing.
2. If I run the job once it is fine, but if I have two of the same job running at the same time I run out of virtual peers
By that I mean I get this message over and over again
Not enough virtual peers have warmed up to start the task yet, backing off and trying again...
This is with running about 50 virtual peers, which seems like a lot since my workflows only have a total of 9 steps in them
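Rough peer arithmetic for the situation above, assuming the common case of one virtual peer needed per task before a job can begin (a sketch, not a diagnosis of this specific cluster):

```clojure
;; 9 tasks per job, 2 identical jobs running concurrently
;; => at least 9 * 2 = 18 virtual peers must be up before both jobs start.
;;
;; The floor rises if any catalog entry pins peer counts, e.g. a
;; hypothetical task declaring
;;   {:onyx/name :heavy-step :onyx/min-peers 4 ...}
;; needs 4 peers to itself per job. A few such tasks across two jobs can
;; exhaust 50 peers, leaving the scheduler "backing off and trying again".
```

Checking the catalogs for `:onyx/min-peers` / `:onyx/max-peers` settings, or the peer config's job scheduler, may explain where the 50 peers are going.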
If I try to add more virtual peers I get this error on startup
org.apache.zookeeper.ClientCnxn - Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
Then I start getting a lot of these errors
15:44:30.500 [clojure-agent-send-pool-3-SendThread(localhost:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server localhost/127.0.0.1:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_67]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.7.0_67]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.7.0_67]
at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.7.0_67]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) ~[na:1.7.0_67]
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:66) ~[zookeeper-3.4.1.jar:3.4.1-1212694]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:291) ~[zookeeper-3.4.1.jar:3.4.1-1212694]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1041) ~[zookeeper-3.4.1.jar:3.4.1-1212694]
So I think ZooKeeper has some kind of limit that kicks in around 50-60 virtual peers?
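A ceiling at roughly 60 connections is consistent with ZooKeeper's per-client-IP connection cap, `maxClientCnxns`, which defaults to 60. With every virtual peer connecting from localhost, they all count against the same IP. A hedged fix, assuming the connection-reset errors really do come from this limit:

```
# zoo.cfg — ZooKeeper caps concurrent connections from a single client IP
# via maxClientCnxns (default 60). Raising it allows more peers from one
# host; 200 here is an arbitrary example value.
maxClientCnxns=200
```

Restart the ZooKeeper server after changing this; the setting is read at startup.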
and now on startup the process just hangs here
16:09:57.332 [clojure-agent-send-pool-3] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@154cfab2
16:09:57.354 [clojure-agent-send-pool-3-SendThread(127.0.0.1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
16:09:57.362 [clojure-agent-send-pool-3-SendThread(127.0.0.1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to 127.0.0.1/127.0.0.1:2181, initiating session
16:09:57.370 [clojure-agent-send-pool-3-SendThread(127.0.0.1:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server 127.0.0.1/127.0.0.1:2181, sessionid = 0x150675a060e085d, negotiated timeout = 40000
16:09:57.377 [clojure-agent-send-pool-3-EventThread] INFO o.a.c.f.state.ConnectionStateManager - State change: CONNECTED