#onyx
2018-04-12
lmergen 12:04:40

i'm implementing a plugin for google pubsub on onyx, which works slightly differently from kafka. specifically, rather than using a consumer offset, it wants you to manually ack each individual message. additionally, it has a stateful "subscription" in the backend, which automatically recovers. i'm looking at how to combine this with onyx's barrier snapshotting, and i think the right semantics would be to keep all messages "in flight" until checkpointed! is called, at which point i will commit all of them in a single operation. i think the sqs plugin has a different approach, where it tags each of the messages in flight with the epoch it belongs to, or something like that. is this correct?

Travis 12:04:27

Would be a cool plugin, we also run on GCP

lmergen 12:04:22

yes it's needed

lmergen 12:04:32

i'll contribute it back to OnyxProject when i'm done

lmergen 12:04:46

could use some help with QA

dbernal 12:04:15

@matt.t.grimm I'm currently running a job that does exactly what you're describing. I read from a SQL table in a partitioned fashion and then several tasks in my job form queries against other tables in the database that contain additional information that I need

Matt Grimm 15:04:24

@dbernal Do you ensure that your I/O is idempotent? My understanding is that a peer may re-run jobs as it replays the event log (from http://lbradstreet.github.io/clojure/onyx/distributed-systems/2015/02/18/onyx-dashboard.html)

lucasbradstreet 15:04:18

We can’t ensure that IO is idempotent, but we do ensure that any windows are rolled back to a consistent checkpoint when failures occur, so you can achieve exactly once processing (as opposed to exactly once side effects).

lucasbradstreet 15:04:33

@lmergen I think the idea should be similar to SQS. You need to tag which epoch a message belongs to so that when an epoch has successfully been checkpointed you can ack the correct results.
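
(A minimal sketch of that epoch-tagging bookkeeping, in plain Clojure with a stand-in ack! function; this is illustrative only and does not use the real onyx plugin protocols or the Google Pub/Sub client API.)

```clojure
;; in-flight is a map of epoch -> vector of ack ids for messages read
;; while that epoch was current.

(defn tag-message
  "Record a freshly polled message's ack id under the current epoch."
  [in-flight epoch ack-id]
  (update in-flight epoch (fnil conj []) ack-id))

(defn checkpointed
  "Called once an epoch's checkpoint is known to be durable: ack every
  message belonging to that epoch or an earlier one, then drop the
  bookkeeping for those epochs. ack! stands in for the real Pub/Sub
  acknowledge call."
  [in-flight epoch ack!]
  (let [done (filter #(<= % epoch) (keys in-flight))]
    (doseq [e done
            ack-id (get in-flight e)]
      (ack! ack-id))
    (apply dissoc in-flight done)))

;; Example: m1/m2 polled during epoch 1, m3 during epoch 2. When epoch 1
;; is checkpointed only m1 and m2 are acked; m3 stays in flight and would
;; be redelivered if the peer failed (at-least-once).
(comment
  (-> {}
      (tag-message 1 "m1")
      (tag-message 1 "m2")
      (tag-message 2 "m3")
      (checkpointed 1 println)))
```

In the actual plugin this acking step would be driven by the input's checkpointed! callback, which is what gives at-least-once rather than exactly-once delivery of the acks, as discussed below.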

lucasbradstreet 15:04:14

@lmergen it’s impossible to get exactly once behaviour with this scheme, as it’s possible for the checkpoint to finish but for you to be unable to ack the messages before the peer fails.

lmergen 15:04:23

yes that is what i'm going after

lucasbradstreet 15:04:24

You will get at least once behaviour though.

lucasbradstreet 15:04:38

Seems like a fine approach then 🙂

lmergen 15:04:52

cool, i think this plugin is long overdue 🙂

lmergen 15:04:15

apparently the whole Clojure community is on AWS

lucasbradstreet 15:04:51

There’s a GCS checkpointing implementation waiting for extraction by others.

lucasbradstreet 15:04:07

Really need to get around to freshening up the onyx-plugin template to make life easier for plugin devs.

Matt Grimm 16:04:16

Does the guarantee of exactly once processing imply that when a new peer comes online, replays the event log, and potentially reruns jobs, that it will not process segments/segment trees that have already been fully processed? In other words, if a job previously completed successfully, would its replay on the new peer amount to a no-op?

lucasbradstreet 17:04:16

When a peer initially replays the scheduler log it will not be scheduled for any processing, as its join/scheduling-related log entries will not exist in the past.

lucasbradstreet 17:04:36

So if a job had completed previously it’s not a problem - the new peer won’t do anything related to that job.

lucasbradstreet 17:04:57

It’s initially trying to get caught up with the current state of the cluster - not take any commands from it.

Matt Grimm 17:04:48

@lucasbradstreet Ah, OK, makes sense. Thank you!

sparkofreason 18:04:35

Are there reasons it could be better to run lots of small jobs communicating over external channels, as opposed to one big one communicating internally via Aeron? Currently doing the latter in a single-machine test environment; lots of CPU even when there are no segments, and it crashes under any load.

lucasbradstreet 18:04:58

How big are we talking / how many peers?

lucasbradstreet 18:04:36

I could see it being better to decouple jobs sometimes, but it comes with downsides, especially around consistent checkpoints.

lucasbradstreet 18:04:25

OK, yeah, I can see how that would be a problem. Onyx isn’t configured by default to work well with that many peers on one box.
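
(A sketch of the kind of single-box peer-config tuning this refers to. The core keys are standard Onyx peer-config; the Aeron embedded media-driver threading and idle-sleep keys are written from memory and should be double-checked against the peer-config cheat sheet for your Onyx version.)

```clojure
;; Illustrative peer-config for a single-machine test environment.
(def peer-config
  {:onyx/tenancy-id "dev"
   :zookeeper/address "127.0.0.1:2188"
   :onyx.peer/job-scheduler :onyx.job-scheduler/greedy
   :onyx.messaging/impl :aeron
   :onyx.messaging/bind-addr "localhost"
   :onyx.messaging/peer-port 40200
   ;; run one embedded Aeron media driver in-process
   :onyx.messaging.aeron/embedded-driver? true
   ;; share the media driver's threads instead of dedicating a core per thread
   ;; (key/value from memory; verify against your Onyx version)
   :onyx.messaging.aeron/embedded-media-driver-threading :shared
   ;; back off harder when peers are idle: larger bounds than the defaults
   ;; trade a little latency for much less CPU spin when no segments flow
   ;; (keys from memory; verify against your Onyx version)
   :onyx.peer/idle-min-sleep-ns 500000
   :onyx.peer/idle-max-sleep-ns 5000000})
```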

lucasbradstreet 18:04:06

Usually it’s a smell that there are too many tasks doing too little.

sparkofreason 18:04:24

Thanks, I think I can reduce that number; I'm probably hand-wiring something better handled with Onyx grouping or something like that.

lucasbradstreet 18:04:33

Yeah, often we see a lot of linear pipelines of A->B->C->D->E where each task does too little and which would be better off collapsed. If you have trouble rationalising them, let me know and I can give you some advice on tuning it for a single box.
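
(As an illustration of collapsing such a pipeline: compose the small per-segment functions into a single :onyx/fn so the workflow shrinks to one processing task. The task and function names below are made up for the example, not taken from the job being discussed.)

```clojure
;; Instead of five Onyx tasks each applying one tiny function
;; (:a -> :b -> :c -> :d -> :e), compose the per-segment functions
;; into a single task function so one peer does the whole chain.
(defn parse    [segment] (update segment :value #(Long/parseLong %)))
(defn enrich   [segment] (assoc segment :source :pubsub))
(defn scale    [segment] (update segment :value * 100))
(defn finalize [segment] (select-keys segment [:id :value :source]))

(def process
  "One :onyx/fn that replaces the four intermediate tasks."
  (comp finalize scale enrich parse))

;; catalog entry for the collapsed task (entries for :in and :out omitted)
(def collapsed-task
  {:onyx/name :process
   :onyx/fn ::process
   :onyx/type :function
   :onyx/batch-size 100})

;; the workflow shrinks from [[:in :a] [:a :b] ... [:e :out]] to:
(def workflow
  [[:in :process]
   [:process :out]])
```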