#onyx
2018-07-14
lucasbradstreet17:07:59

Are you seeing any peer timeout messages in your logs?

sparkofreason17:07:52

Bumped up the number of peers, and it seems happier.

lucasbradstreet18:07:19

Right. Those unavailable-image messages are more a symptom of the peers timing out. I may remove that message since it’s not particularly helpful. What was probably happening was that a single peer was doing too much work and didn’t have a chance to heartbeat in time to avoid being timed out.

lucasbradstreet18:07:34

If they’re doing lots of work, you may want to reduce the batch size and/or increase the timeouts.
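
For reference, a rough sketch of the knobs in question: :onyx/batch-size is set per task in the catalog, while the peer-config liveness keys below are my best guess at the relevant 0.13-era timeout settings (task name is hypothetical).

```clojure
;; Catalog entry: a smaller :onyx/batch-size means less work per pass,
;; so a busy peer gets more chances to heartbeat.
{:onyx/name :my-aggregation          ; hypothetical task name
 :onyx/type :function
 :onyx/fn :clojure.core/identity
 :onyx/batch-size 20
 :onyx/batch-timeout 50}

;; Peer config: give busy peers longer before they are considered dead.
;; These key names are assumptions about the 0.13-era settings.
{:onyx.peer/heartbeat-ms 1000
 :onyx.peer/subscriber-liveness-timeout-ms 60000
 :onyx.peer/publisher-liveness-timeout-ms 60000}
```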

lucasbradstreet18:07:56

I’ll improve that message too. What’s happening there is that you are generating too many segments in one pass. Say you have a batch size of 200, and each segment generates over 100 segments: you end up with over 20000 segments, which overflows the preallocated buffer.

sparkofreason18:07:14

Could it occur just because Aeron fell behind? I have a custom input generating segments, and there's a single aggregation task downstream that emits on a timer trigger. The custom input will definitely output more than 20K total segments, though.

lucasbradstreet18:07:32

Total segments is fine. I think this is emitting that many segments in a single pass. Is it possible that you’re emitting 20000 messages via a single trigger/emit?

sparkofreason18:07:38

Don't think so. The emit function returns a single map.

sparkofreason18:07:14

There could be a lot of windows active.

lucasbradstreet18:07:34

Right, I was about to say that if there are more than 20000 windows emitting at the same time, that could be a problem too.

lucasbradstreet18:07:12

This is especially a problem for the timer trigger since it can end up firing for all windows at the same time.

sparkofreason18:07:07

trigger/fire-all-extents? is false. Should that make a difference?

lucasbradstreet18:07:31

For timer triggers it’s global so that will apply anyway.

lucasbradstreet18:07:49

I mean it’ll still fire all extents since the timer trigger is global

lucasbradstreet18:07:18

I’m trying to think of a better strategy for this situation

sparkofreason18:07:44

Hmmm. So I'm actually running a custom trigger. Is there something I can do in that implementation?

lucasbradstreet19:07:03

Hmm. You’ve returned true for whether it should fire, which means that all windows will be flushed. I think we could either make it so that the messages are written out in multiple phases, or we could increase the buffer size, or possibly we could give you some way of ensuring the number of windows doesn’t grow too big before flushing.

lucasbradstreet19:07:37

I’m leaning towards the last option, as generally the timer is supposed to put an upper bound on how much is buffered up before you flush, but if you have built up a lot of segments to emit you may want to flush early.

lucasbradstreet19:07:51

Would that option work for your use case?

sparkofreason19:07:12

The trigger logic is supposed to work as a combination of segment and timer: it should trigger within a time period only if a new segment was received. So a new segment arrives and starts the clock, after which any further segments have no effect. Once the timer fires, the state is reset, so the clock will not start again unless a new segment is received. My reasoning was to avoid the situation where a lot of windows would fire with no changes.
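
As a point of reference, a minimal sketch of such a composite segment/timer trigger against the 0.13-era custom-trigger map shape. The :trigger/init-state, :trigger/next-state, and :trigger/trigger-fire? keys follow onyx.triggers; the :event-type value and the :my/period-ms trigger key are assumptions, and the reset-after-fire bookkeeping is elided.

```clojure
(ns my.app.triggers)

(defn init-state
  "No clock running until a segment arrives."
  [trigger]
  nil)

(defn next-state
  "The first new segment after a reset starts the clock; further
   segments leave it running. :new-segment is an assumed event type."
  [trigger state {:keys [event-type]}]
  (if (and (nil? state) (= event-type :new-segment))
    (System/currentTimeMillis)
    state))

(defn trigger-fire?
  "Fire only if the clock was started and the period has elapsed.
   :my/period-ms is a hypothetical trigger-entry key."
  [{:keys [my/period-ms]} state state-event]
  (boolean (and state
                (>= (- (System/currentTimeMillis) state) period-ms))))

(def ^:export segment-timer
  {:trigger/init-state init-state
   :trigger/next-state next-state
   :trigger/trigger-fire? trigger-fire?})
```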

lucasbradstreet19:07:32

OK, right, that’s not working correctly then. I think what’s happening is we’re defaulting to fire-all-extents? on all non-segment triggers, and that’s causing you issues. Are you on 0.13.0?

lucasbradstreet19:07:06

I can send you a snapshot to see if we can fix it by respecting fire-all-extents?, and then add validation for the trigger types where fire-all-extents? must be true.

lucasbradstreet19:07:29

I have to run out for a sec. I’ve pushed a snapshot which respects fire-all-extents? for all trigger types: 0.13.1-20180714.191549-15
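
For anyone following along, the corresponding trigger entry might look something like this under the snapshot's behavior (window id, period key, and sync fn are hypothetical):

```clojure
{:trigger/window-id :my-window
 :trigger/id :segment-timer
 :trigger/on :my.app.triggers/segment-timer
 :trigger/fire-all-extents? false   ; now respected: only the fired extent flushes
 :my/period-ms 5000
 :trigger/sync :my.app.core/write-out!}
```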

sparkofreason19:07:37

Yes, I am on 0.13.0

lucasbradstreet19:07:41

If you want to test it out and let me know how you go, I can figure out the right way to make the change.

lucasbradstreet19:07:16

We haven’t had anyone create a composite timer/segment-type trigger, so this hadn’t come up yet.

sparkofreason19:07:34

Thanks, I'll give it a whirl after lunch.

lucasbradstreet19:07:06

sure thing. Lunch for me too

lucasbradstreet19:07:36

If this turns out to be the problem I’ll be pretty happy, as the 20000-segments-per-pass issue was a bit of a smell for a streaming job.

sparkofreason20:07:00

Looks like that was the answer. Running much faster, with far fewer restarts, at least so far.

lucasbradstreet20:07:28

Great. Yeah, I could see a lot of bad behaviour coming from that. I’ll have a think about how to make that change right.

lucasbradstreet20:07:44

I think with more validation or settings on the trigger implementation side it should work out well.

sparkofreason20:07:39

These windows are all time-based, so once time has passed the extent, should I evict them? It just clicked that perhaps that's the point of watermark triggers.

lucasbradstreet20:07:48

Yeah, that’s the point of watermark triggers. You could add something like that to your trigger + input plugin

lucasbradstreet20:07:20

Pretty much have to evict at some point if you have long running streaming jobs. Otherwise you’ll just keep adding state.
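
A hedged sketch of what that might look like: a watermark trigger on the same window, with eviction so state for extents whose time has passed is discarded instead of accumulating. The :trigger/post-evictor key is my assumption about the 0.13-era eviction setting.

```clojure
{:trigger/window-id :my-window
 :trigger/id :watermark
 :trigger/on :onyx.triggers/watermark   ; fires when the watermark passes an extent's upper bound
 :trigger/post-evictor [:all]           ; assumed key: drop the extent's state after firing
 :trigger/sync :my.app.core/write-out!}
```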

lucasbradstreet21:07:46

Oh, you probably haven’t implemented the watermark protocol on your input plugin.

lucasbradstreet21:07:04

The input plugin is responsible for feeding timestamps down through the pipeline

lucasbradstreet21:07:02

The way it works is that all of the segments will be between two barriers, each with its own timestamp. This is so that, when watermarks from multiple input sources are mismatched, the minimum of them can be taken.

sparkofreason21:07:26

I was going off this in the docs: "Trigger only fires if the value of :window/window-key in the segment exceeds the upper-bound in the extent of an active window." Is that no longer valid?

lucasbradstreet21:07:15

That’s no longer valid now that we have a better way of doing watermarks. I’ll fix the doc. Thanks

sparkofreason21:07:14

Makes sense, and though I only have a single input for my simulation case, that may not hold in production.

lucasbradstreet21:07:16

assign-watermark-fn also works if your data may change for a given input plugin
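
A sketch of how an input task might feed watermarks, assuming the segments carry a millisecond event-time field (the field name and the task/plugin names are hypothetical; :onyx/assign-watermark-fn is the catalog key referenced above):

```clojure
(defn event-time
  "Extract a millisecond event timestamp from each input segment.
   :event-time is a hypothetical field."
  [segment]
  (:event-time segment))

;; Input catalog entry: Onyx stamps the barriers surrounding each batch
;; with this watermark, taking the minimum across input sources.
{:onyx/name :in
 :onyx/plugin :my.plugin/input          ; hypothetical custom input plugin
 :onyx/type :input
 :onyx/medium :custom
 :onyx/assign-watermark-fn :my.app.core/event-time
 :onyx/batch-size 20}
```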

sparkofreason22:07:54

So it looks like the :fire-all-extents patch broke watermarks. But for my immediate purposes, it doesn't matter. The reason I wound up with the segment/timer trigger and a large number of active windows was that I didn't grok the watermark/eviction connection. Looks like I can use a combination of out-of-the-box timer and watermark triggers to get the desired outcome. Thanks for all of your help.

lucasbradstreet22:07:51

That makes sense too. Cool. I’ll think about what we should do with the fire-all-extents change in the future, but for now I won’t make any changes there.