#onyx
2017-10-26
asolovyov 08:10:45

is there a guide or an example of how to update plugins from 0.9 to 0.11? 🙂 I want to update onyx-http, since we're using it and it's finally time to update Onyx, but the onyx-plugin template is oriented towards 0.9... a suggestion of a simpler plugin to read would suffice; I can read the code, it's just that onyx-kafka is pretty complex 🙂

lmergen 10:10:21

@asolovyov it's best to look at existing plugins that have been upgraded and see what's changed

lmergen 10:10:09

it should be fairly easy though

lmergen 10:10:23

perhaps take a look at onyx-sql or onyx-kafka to see what new plugins look like

asolovyov 10:10:34

I'm looking at onyx-bookkeeper right now, since I found a comment in onyx-http saying it was written after the former 🙂

parameme 13:10:07

G'day all. There doesn't seem to be a #pyroclast channel, but I just thought I would drop a line to say that the npm install -g pyroclast-cli suggested by http://pyroclast.io/docs/guides/getting-started.html is currently failing with

parameme 13:10:21

npm ERR! code 1
npm ERR! Command failed: /usr/local/bin/git checkout develop
npm ERR! error: pathspec 'develop' did not match any file(s) known to git.
npm ERR!

parameme 13:10:02

(Pulling the underlying git repo does indeed only result in a master branch).

michaeldrogalis 15:10:43

@parameme I'll PM you -- thanks!

eelke 15:10:31

I have a question. Is it possible to use windows and triggers that output data to another task in the workflow?

eelke 15:10:02

With the :trigger/emit parameter maybe

michaeldrogalis 16:10:42

@eelke Correct. :trigger/emit provides that functionality.
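For reference, a rough sketch of what a window/trigger pair using :trigger/emit might look like. This follows the map syntax from recent Onyx versions, but the ids, task name, window range, threshold, and the emit function are all made up for illustration, so check the cheat sheet for your version before copying:

```clojure
;; Hypothetical window + trigger pair. When the trigger fires,
;; whatever the :trigger/emit function returns is routed to the
;; downstream tasks of :in like an ordinary segment.
(def windows
  [{:window/id          ::counts
    :window/task        :in
    :window/type        :fixed
    :window/aggregation :onyx.windowing.aggregation/count
    :window/window-key  :event-time
    :window/range       [5 :minutes]}])

(defn emit-segment
  ;; Same 5-arg shape as :trigger/sync, as far as I recall;
  ;; verify against the docs for your Onyx version.
  [event window trigger state-event extent-state]
  {:window-count extent-state})

(def triggers
  [{:trigger/window-id ::counts
    :trigger/id        ::emit-counts
    :trigger/on        :onyx.triggers/segment
    :trigger/threshold [10 :elements]
    :trigger/emit      ::emit-segment}])
```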

lmergen 18:10:52

how do peers coordinate with each other on which tasks they run, and/or which job to start working on? is that communicated through the log?

michaeldrogalis 18:10:05

$ l onyx/src/onyx/log/commands/

lmergen 18:10:28

that's what i was looking for 🙂

michaeldrogalis 18:10:45

In particular look out for the lines (reconfigure-cluster-workload replica)

lmergen 18:10:10

let me see

lmergen 18:10:34

trying to figure out whether i can observe from the log which jobs are currently being processed, and which ones are still in queue

michaeldrogalis 18:10:04

Under the replica there's a few keys - :jobs, :completed-jobs, :allocations
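As a quick sketch, here is one way those keys could be used to classify jobs from a replica snapshot. The exact shape of :allocations (job-id -> {task-id [peer-ids]}) is stated from memory, so verify it against the replica in your version before relying on this:

```clojure
;; Sketch: classify jobs from a replica snapshot.
;; Assumes :jobs holds queued/running job ids, :completed-jobs holds
;; finished ones, and a job with any peers assigned under
;; :allocations counts as running.
(defn job-overview [replica]
  (let [allocated? (fn [job]
                     (some seq (vals (get-in replica [:allocations job]))))]
    {:running   (filter allocated? (:jobs replica))
     :queued    (remove allocated? (:jobs replica))
     :completed (:completed-jobs replica)}))
```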

lmergen 18:10:28

let me see

lmergen 18:10:28

good to know

lmergen 18:10:44

i'm finally getting around to writing that onyx log -> datomic relay

lmergen 18:10:14

it appears to be a good strategy

michaeldrogalis 18:10:28

Yeah, it's super 🙂

Travis 19:10:33

Hey guys, the battle continues with the Media Driver timeout. I separated the media driver out into its own container in the same pod and got all of that working. One thing I noticed in the onyx dashboard is that shortly after submitting the job (still idle, no data to consume) I start getting lots of events about peers leaving and joining the cluster (:leave-cluster, :prepare-join-cluster, :add-peer). This goes on for a bit until eventually a timeout occurs.

michaeldrogalis 21:10:57

@camechis It sounds like your peer pods are having trouble communicating with each other on the ports they expose.

Travis 21:10:43

that was a thought, just have no idea why that would be

Travis 21:10:43

could the checkpoint storage come into play at all with this issue? Want to make sure our GCS implementation is not causing this

michaeldrogalis 22:10:16

@camechis Unlikely, unless you see stack traces in the onyx logs indicating that your storage impl is having issues.

Travis 22:10:04

Ok, good there. I switched to one physical peer and getting much better results. Think it's communication between the pods

michaeldrogalis 22:10:37

Sounds right. If things appear okay, and then as soon as activity starts with a job you start seeing peers dropping off, it's likely that they tried to open connections to one another and the network channels to do so aren't open.

Travis 22:10:13

Yeah, I also see lots of errors on missing subscriptions

gardnervickers 22:10:08

How are you setting your bind addr?

gardnervickers 22:10:44

Should have something like

- name: BIND_ADDR
  valueFrom:
    fieldRef:
      fieldPath: status.podIP

gardnervickers 22:10:16

Also your media driver needs UDP open

ports:
  - containerPort: 40200
  - containerPort: 40200
    protocol: UDP
@camechis
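Putting the two fragments together, a minimal peer container spec might look like this. The container name and image are placeholders, and 40200 is just the example port from above:

```yaml
containers:
  - name: onyx-peer
    image: my-org/onyx-peer:latest   # placeholder image
    env:
      - name: BIND_ADDR              # bind Aeron to the pod's own IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
    ports:
      - containerPort: 40200         # default protocol (TCP)
      - containerPort: 40200         # same port also open for UDP,
        protocol: UDP                # which the media driver needs
```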

Travis 22:10:45

Do I need to set the hostPort?

Travis 22:10:38

Don't think that makes sense now that I think about it