#onyx
2017-08-31
zirmite01:08:52

was just going to come here and ask if there were any k8s-helm charts for onyx and I found this: https://github.com/onyx-platform/charts ! i see that’s a work-in-progress, but great to see that’s in the pipeline.

michaeldrogalis01:08:34

@zirmite We’re using that in production for Pyroclast. Not particularly well documented, but it’s serving its purpose.

zirmite01:08:01

ah, so less WIP than I thought?

michaeldrogalis01:08:02

We have other clients using it in prod too, so if you see a problem, do shout. 🙂

zirmite01:08:11

i’ll give it a go

wildermuthn12:08:58

I’ve been using Spark the last few weeks, and recently have been looking at its concept of windows. Compared to Onyx, am I right that Spark Streaming windows only trigger at the end of the window period, unlike Onyx’s global windows, which can trigger (and emit) at any point defined in the Onyx window?

Travis13:08:40

It's been a little while, but in Onyx, windows and triggers are separate. You can choose to emit a window's contents using different trigger types

michaeldrogalis14:08:41

@wildermuthn What @camechis said. Our windows and triggers most closely resemble Google Dataflow’s. The next release will alter refinements to look more like Flink’s evictor abstraction

michaeldrogalis14:08:28

We’re going to be adding watermarks in the next release also - that’s come up as an immediate need.
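
For reference, a minimal sketch of how windows and triggers are declared as separate maps in Onyx; the task name, trigger threshold, and sync function below are illustrative rather than from this conversation, and the exact refinement/trigger keys differ slightly between 0.9.x and 0.10:

```clojure
;; Hypothetical window: collect every segment flowing through a task.
(def windows
  [{:window/id :collect-segments
    :window/task :identity                        ; illustrative task name
    :window/type :global
    :window/aggregation :onyx.windowing.aggregation/conj}])

;; A trigger attached to that window: fires after every 5 segments and
;; calls a user-supplied sync function with the window contents.
(def triggers
  [{:trigger/window-id :collect-segments
    :trigger/id :sync-every-5                     ; required on 0.10+
    :trigger/refinement :onyx.refinements/accumulating
    :trigger/on :onyx.triggers/segment
    :trigger/threshold [5 :elements]
    :trigger/sync ::deliver!}])                   ; your fn, defined elsewhere
```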

jsonmurphy15:08:32

Hey everyone… I’ve just started getting my feet wet with Onyx and I’m trying to wrap my head around the behaviour of (virtual) peers. I started with the simple examples using the core.async plugin and that works fine. I tried to switch the input to onyx-durable-queue, but unlike the core.async plugin, the tasks never seem to complete, and trying to run the job multiple times eventually uses up all the peers… (no virtual peers to start its execution)

jsonmurphy15:08:14

my question is… should I be starting and stopping peers for every job, or am I perhaps using durable-queue incorrectly?

lmergen15:08:27

i think the difference is that with the core.async plugin, there is an “end” to the job when the channel closes
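
Roughly, that looks like the sketch below with the core.async plugin: feed the input channel, then mark it finished so the job can complete (on 0.9.x the :done sentinel ends the input; on 0.10+ closing the channel is what matters). Channel name and segments are placeholders.

```clojure
(require '[clojure.core.async :as a])

;; Input channel wired to the core.async input task via lifecycles.
(def input-chan (a/chan 100))

;; Feed a few segments, then signal end-of-input so the job completes.
(doseq [seg [{:n 1} {:n 2} {:n 3}]]
  (a/>!! input-chan seg))

(a/>!! input-chan :done)   ; 0.9.x sentinel marking the end of the stream
(a/close! input-chan)      ; on 0.10+ closing the channel ends the input
```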

michaeldrogalis15:08:37

@jsonmurphy I don’t think durable-queue has been ported forward to Onyx 0.10.

jsonmurphy15:08:00

i am using Onyx 0.9.15

michaeldrogalis15:08:33

I’d recommend jumping to 0.10 - there are large internal changes between the two versions.

michaeldrogalis15:08:49

Is there a specific reason you like Durable Queue here? Asking since we haven’t brought it forward to 0.10

jsonmurphy15:08:30

I wanted something to queue tasks that isn’t as heavy as setting up Kafka but is still persistent across restarts

michaeldrogalis15:08:27

Hmm, yeah I guess we just got used to running Kafka in a container.

michaeldrogalis15:08:45

There’s no reason we can’t port DQ up to the next version other than time. 😕

jsonmurphy15:08:17

i see… i’ll explore the Kafka option then

michaeldrogalis15:08:43

Sounds good - do shout if you need help.
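
If it helps, here is a rough sketch of an onyx-kafka input catalog entry; the topic, group id, addresses, and deserializer are placeholders, and the exact :kafka/* keys vary by plugin version, so check the onyx-kafka README for your release:

```clojure
;; Illustrative catalog entry for reading from Kafka; values are placeholders.
(def kafka-input
  {:onyx/name :read-messages
   :onyx/plugin :onyx.plugin.kafka/read-messages
   :onyx/type :input
   :onyx/medium :kafka
   :kafka/topic "events"                          ; placeholder topic
   :kafka/group-id "onyx-consumer"                ; placeholder group
   :kafka/zookeeper "127.0.0.1:2181"              ; 0.9.x-era addressing
   :kafka/deserializer-fn :my.ns/deserialize      ; hypothetical fn
   :onyx/min-peers 1                              ; typically one peer
   :onyx/max-peers 1                              ; per topic partition
   :onyx/batch-size 100
   :onyx/doc "Reads segments from a Kafka topic"})
```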

jsonmurphy15:08:07

sure… and to my original question… I’m just confirming that I should only need to start the peers once in all cases?

michaeldrogalis15:08:57

As long as your job finishes, peers will be reallocated to new jobs as work is received. If you’re running streaming jobs that never complete, the virtual peers will continue to be pinned to those jobs — unless you use some of the advanced scheduling features to dictate priority.
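
For context, the job scheduler is chosen in the peer config. Here is a minimal sketch under assumed local-dev settings; the addresses and tenancy id are placeholders, and on 0.9.x the tenancy key is :onyx/id.

```clojure
;; Sketch of the peer-config knob that decides how virtual peers are shared
;; across jobs; all values here are illustrative local-dev settings.
(def peer-config
  {:zookeeper/address "127.0.0.1:2188"
   :onyx/tenancy-id "dev-tenancy"                 ; :onyx/id on 0.9.x
   ;; :greedy hands every free peer to one job, :balanced spreads peers
   ;; evenly across jobs, :percentage gives each job a declared share.
   :onyx.peer/job-scheduler :onyx.job-scheduler/balanced
   :onyx.messaging/impl :aeron
   :onyx.messaging/peer-port 40200
   :onyx.messaging/bind-addr "localhost"})
```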

michaeldrogalis15:08:30

If you’re running tests, try with-test-env to quickly reboot the environment between runs.
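
A rough sketch of that pattern, assuming env-config, peer-config, and job are defined elsewhere:

```clojure
(require '[onyx.test-helper :refer [with-test-env]]
         '[onyx.api])

;; Starts a dev environment and 3 virtual peers, runs the body, then
;; tears everything down again, even if the body throws.
(with-test-env [test-env [3 env-config peer-config]]
  (let [{:keys [job-id]} (onyx.api/submit-job peer-config job)]
    (onyx.api/await-job-completion peer-config job-id)))
```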

jsonmurphy15:08:07

ok thanks for the help 🙂