#onyx
2017-02-02
dspiteself17:02:19

What is the best way to aggregate input segments into bundles for batched HTTP requests?

dspiteself17:02:10

Currently we use the HTTP response to update our database status further down the pipeline.

dspiteself17:02:51

A :trigger/sync seems terminal.
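
For readers following along, here is a minimal sketch of the kind of window and trigger being discussed, assuming Onyx's standard window and trigger maps; the task, window, and function names are illustrative, not from this conversation. The point about :trigger/sync being "terminal" is that the sync function is called for its side effect and its return value is not emitted to any downstream task.

```clojure
;; Illustrative only: a global window that conj's segments together, a trigger
;; that fires every 100 segments, and a sync fn that performs the batched call.
(def windows
  [{:window/id          :batch-window
    :window/task        :process-request        ; hypothetical task name
    :window/type        :global
    :window/aggregation :onyx.windowing.aggregation/conj}])

(def triggers
  [{:trigger/window-id  :batch-window
    :trigger/refinement :onyx.refinements/discarding
    :trigger/on         :onyx.triggers/segment
    :trigger/threshold  [100 :elements]
    :trigger/sync       ::send-batch!}])

;; Sync fns are invoked for side effects; the return value is not forwarded
;; downstream, which is why :trigger/sync alone looks like a dead end here.
(defn send-batch! [event window trigger state-event batch]
  (println "Would send one batched HTTP request for" (count batch) "segments"))
```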

michaeldrogalis17:02:38

@dspiteself You’re looking to send the result of an aggregate to a downstream task?

dspiteself17:02:34

I know that probably interacts with the timeouts.

michaeldrogalis17:02:43

@dspiteself The old way was to route it through another input. We have support for this in 0.10. I’m not sure whether the patch for it made it into the first alpha, but if not, we have all the hard stuff figured out.

michaeldrogalis17:02:08

Yeah, that was the difficult piece to get in place: what happens when there’s a downstream failure.

dspiteself17:02:38

how do you "route it through another input"?

dspiteself17:02:50

Use the Kafka plugin?

michaeldrogalis17:02:53

We’d usually put it on Kafka

michaeldrogalis17:02:05

It’s cumbersome, but it got the job done.
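
A rough sketch of that workaround, under these assumptions: the trigger's sync fn publishes the aggregated batch to a Kafka topic (here via the plain Java producer client), and a second Onyx job reads that topic back in with the onyx-kafka input plugin, performs the batched HTTP request, and updates the database status downstream. The broker address, topic name, and serialization are placeholders.

```clojure
(ns example.batch-routing
  (:import (org.apache.kafka.clients.producer KafkaProducer ProducerRecord)))

;; Placeholder broker/topic; serialize the batch however the consumer expects.
(def ^:private producer
  (KafkaProducer.
   {"bootstrap.servers" "localhost:9092"
    "key.serializer"    "org.apache.kafka.common.serialization.StringSerializer"
    "value.serializer"  "org.apache.kafka.common.serialization.StringSerializer"}))

;; Used as :trigger/sync: instead of making the HTTP call here, hand the
;; batch to Kafka so another job can treat it as a fresh input.
(defn publish-batch! [event window trigger state-event batch]
  (.send producer (ProducerRecord. "http-batches" (pr-str batch))))

;; A second job would then read "http-batches" with the onyx-kafka input
;; plugin, make the batched HTTP request in a downstream task, and use the
;; response to update database status as in the original pipeline.
```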

michaeldrogalis17:02:33

But anyway, check with @lucasbradstreet when he’s around. I’ve been tied down in a lot of meetings lately, sorry I can’t give you a firm answer.

michaeldrogalis17:02:59

The triggers API is being augmented with the ability to pass the aggregate value down to any immediate downstream tasks.
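
Sketch only: if the trigger map grew an emit-style hook alongside :trigger/sync, it might look like the following. The :trigger/emit key and the function signature are assumptions, and as @lucasbradstreet points out below, this API had not actually been released at the time of this conversation.

```clojure
;; Hypothetical emit-from-trigger API: when the trigger fires, the function's
;; return value would be sent to immediate downstream tasks as a segment,
;; rather than being discarded like a :trigger/sync return value.
(def triggers-with-emit
  [{:trigger/window-id  :batch-window
    :trigger/refinement :onyx.refinements/discarding
    :trigger/on         :onyx.triggers/segment
    :trigger/threshold  [100 :elements]
    :trigger/emit       ::emit-batch}])   ; assumed key, not yet available here

(defn emit-batch [event window trigger state-event batch]
  {:batch batch})
```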

dspiteself17:02:51

Oh, that is released in 0.10.0?

michaeldrogalis17:02:57

@dspiteself We have alpha releases out for 0.10.

dspiteself20:02:26

@lucasbradstreet I would love to know how to pass the aggregate value down to any immediate downstream tasks when you get a chance. I tried to find it in the code, but there is enough indirection that it is difficult.

lucasbradstreet20:02:55

@dspiteself @michaeldrogalis alas, we don’t have an API for that yet. I’ve held off so far, as the messaging code needs to be non-blocking, so it’s not as simple as just calling the messenger from a trigger.

lucasbradstreet20:02:20

I’ll see if I can come up with something temporary at least.

dspiteself20:02:29

No, don’t worry.

michaeldrogalis20:02:32

Ah, I thought that portion was complete. Sorry for the misinformation.

dspiteself20:02:47

I will use Michael's answer.

lucasbradstreet20:02:02

It won’t be a lot of work once we know what the API should look like.

dspiteself20:02:02

Is there any queue that will not require more ops work?

dspiteself20:02:49

Yeah, we are in Google Cloud, so Kafka will be fine.

dspiteself20:02:16

If you do drop a feature like that, I would love a ping. 🙂

michaeldrogalis20:02:01

Yep, shouldn’t be much longer. Been in the making for ages.