#aleph
2016-10-06
jeroenvandijk07:10:48

I’ve done some catching up on reading. I think I understand the idea behind bulkheads now 🙂

jeroenvandijk07:10:09

I think both core.async and manifold are suitable, as they both allow you to run things on dedicated thread pools
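
A minimal sketch of the dedicated-pool idea with manifold (core.async's thread-based pools give you something similar); the pool size and query-db are illustrative, not from the discussion:

(require '[manifold.deferred :as d]
         '[manifold.executor :as ex])

;; a bulkhead: all calls to one downstream component are confined to
;; these 4 threads, so a slow component can't starve the whole system
(def db-pool (ex/fixed-thread-executor 4))

;; query-db is a hypothetical blocking call
(d/future-with db-pool (query-db "select 1"))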

jeroenvandijk08:10:09

I don’t think Erlang supports bulkheads out of the box. The actors themselves might restart, but with Erlang you can also end up with actors in a distributed deadlock if you’re not careful

dm308:10:23

I guess I don't see how you can implement "generic" bulkheads - I don't find the definition that useful

jeroenvandijk08:10:46

if by "generic" you mean it can never fail, I agree, I don’t see how you can implement that either

dm308:10:02

in Hystrix a bulkhead just limits the number of concurrent calls to the component

lmergen08:10:18

I thought in Hystrix you had a thread (pool) per component?

dm308:10:52

that's one way to limit the number of calls

dm308:10:57

the other is using semaphores
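
A semaphore-style bulkhead is only a few lines with plain java.util.concurrent; a sketch, not Hystrix's actual implementation (call-service and the limit of 10 are made up):

(import 'java.util.concurrent.Semaphore)

;; at most 10 calls may be in flight at once; excess callers
;; fail fast instead of queueing behind a slow component
(def permits (Semaphore. 10))

(defn with-bulkhead [f fallback]
  (if (.tryAcquire permits)
    (try (f)
         (finally (.release permits)))
    (fallback)))

;; usage: (with-bulkhead #(call-service req) (constantly ::rejected))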

dm308:10:47

anyway, I don't see it as an implementable pattern, but rather as an implementation approach in a more general sense

dm308:10:03

but this obviously depends on what you take as the definition

jeroenvandijk08:10:32

this book (the preview; the original is probably better) has some useful context too: https://www.nginx.com/wp-content/uploads/2015/01/Building_Microservices_Nginx.pdf

jeroenvandijk08:10:35

But we don’t really do microservices or SOA, so most of it doesn’t apply to my application. The general principles, though, especially in the Nygard book, are useful as rules of thumb for preventing cascading failures

jeroenvandijk08:10:50

What I find appealing about Hystrix at the moment is its out-of-the-box monitoring and latency control. And because I don’t want to be trapped in a Java-like architecture, and I want some custom controls, I’m trying to build my own version

jeroenvandijk08:10:41

I’m planning to use this for latency control too (if I get a patch in): https://github.com/ztellman/aleph/issues/273

jeroenvandijk08:10:55

We had some queueing issues 🙂

dm308:10:43

did you try using executor stats?

jeroenvandijk08:10:16

No, I didn’t know about those before. However, I think those are aggregates? I need to know at the per-request level

dm308:10:49

yeah, those will be aggregate

jeroenvandijk08:10:14

Since we only have 100ms or so, if you’ve already been waiting for 75ms it might not be worth continuing to take the offer into account
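
A sketch of that kind of per-request budget check; request-start-ms stands for a per-request arrival timestamp (which is what the aleph issue above would provide), and the numbers are illustrative:

;; skip work once a request has burned most of its ~100ms budget
(defn still-worth-it? [request-start-ms]
  (< (- (System/currentTimeMillis) request-start-ms) 75))

;; in the handler:
;; (when (still-worth-it? request-start-ms)
;;   (compute-bid request))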

dm308:10:23

public enum Metric {
    QUEUE_LENGTH,
    QUEUE_LATENCY,
    TASK_LATENCY,
    TASK_ARRIVAL_RATE,
    TASK_COMPLETION_RATE,
    TASK_REJECTION_RATE,
    UTILIZATION
}
that's what the executor gives you

jeroenvandijk08:10:57

Thanks, looks useful to add to our monitoring as well
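
A hedged sketch of reading those metrics: manifold's executors are dirigiste io.aleph.dirigiste.Executor instances, and getStats returns a snapshot you can query per quantile (method names follow dirigiste's Stats class; worth double-checking against the version you're on):

(require '[manifold.executor :as ex])

;; an instrumented pool targeting 90% utilization, max 64 threads
(def pool (ex/utilization-executor 0.9 64))

(let [stats (.getStats pool)]
  {:utilization   (.getUtilization stats 0.9)    ;; 90th percentile
   :queue-latency (.getQueueLatency stats 0.9)   ;; nanoseconds
   :task-latency  (.getTaskLatency stats 0.9)})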

jeroenvandijk10:10:45

How do you handle errors in a websocket/SSE stream? I have an error somewhere and cannot find it easily. I would like to wrap the stream with some error logging. manifold.deferred/catch doesn’t seem to work in this case

jeroenvandijk10:10:40

for a moment I thought I had misunderstood the examples

jeroenvandijk10:10:12

From some manual testing it seems you have to wrap the put! call with error handling. Hmm, will test a bit more
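
A sketch of wrapping the put! side with logging (the logging library and message are placeholders):

(require '[manifold.stream :as s]
         '[manifold.deferred :as d]
         '[clojure.tools.logging :as log])

;; s/put! returns a deferred, so errors surface there rather than on
;; the stream itself; attach the catch to that deferred
(defn logged-put! [sink msg]
  (-> (s/put! sink msg)
      (d/catch (fn [e]
                 (log/error e "error writing to websocket stream")
                 false))))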

dm310:10:53

there's a bunch of cases where an error can happen inside some callback; the stream gets closed and the error gets logged

dm310:10:03

you can't get the error in any way

dm310:10:32

like the consume example

dm310:10:00

hm, maybe not zip exactly, as it can't really fail

dm310:10:15

if your xform throws
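
A small repro of the failure mode being described, assuming the behavior outlined above (error logged, stream closed, nothing delivered to the consumer):

(require '[manifold.stream :as s])

(def src (s/stream))
(def dst (s/transform (map #(/ 100 %)) src))

(s/put! src 0)   ;; the xform throws ArithmeticException
;; dst is now closed; the exception is logged, but a take! on dst
;; just yields nil, so the error itself can't be recovered here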

jeroenvandijk11:10:57

in this case I would be happy with a log, but I haven't seen that either

jeroenvandijk11:10:20

I’m actually not sure what happens

jeroenvandijk11:10:02

ah yes, configuring logging correctly helps 🙂

jeroenvandijk11:10:32

It would be nice, though, to have some way to control where the exception goes when the unexpected happens

dm311:10:35

you can upvote the issue 🙂

dm311:10:58

even better, write up your experiences

lmergen12:10:34

exception management and async are always a gnarly combination. I think in manifold’s case, simply making the error handler pluggable would indeed be good enough

lmergen12:10:26

@jeroenvandijk I found out yesterday that next week’s meetup is going to be about Onyx, so that’s gonna be interesting!

jeroenvandijk12:10:39

Yes indeed 🙂

jeroenvandijk12:10:53

We are using it in pre-production. What about you?

lmergen12:10:31

I haven’t been able to find a good enough reason to use it 😕

lmergen12:10:02

we’re mostly focused on AWS, and as such we use Redshift for our data warehouse; for the more complex processing we use Spark

lmergen12:10:06

both hosted by AWS

jeroenvandijk13:10:37

How do you get the data to S3/Redshift?

jeroenvandijk13:10:26

We use it to move data from our bidder cluster via Kafka to S3 and other services

jeroenvandijk13:10:01

But it’s quite complicated and I wouldn’t recommend it if you have something simple working

lmergen13:10:11

we use Kinesis Firehose

lmergen13:10:14

which is like Kafka

jeroenvandijk13:10:32

Ah, I see, so they manage the connection to Redshift, I guess

jeroenvandijk13:10:44

we need to send data to other services too

lmergen13:10:15

yes, basically it buffers on S3 until it reaches a certain threshold (every 900 seconds or 64MB is what we use) and then imports it into Redshift

jeroenvandijk13:10:24

I would like to try Redshift too on our data

lmergen13:10:31

we then use AWS Data Pipeline to query Redshift

lmergen13:10:40

in fact, I don’t think anyone ever connects to Redshift directly 🙂

lmergen13:10:45

(over here in our org)

jeroenvandijk13:10:55

No? Not through a JDBC connection?

jeroenvandijk13:10:33

I hope to have a SQL interface on our data via either Redshift, Drill or BigQuery sometime
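
For what it's worth, Redshift speaks the Postgres wire protocol, so a plain JDBC connection does work; a sketch with clojure.java.jdbc (host, credentials and table are placeholders, and the Postgres or Redshift JDBC driver has to be on the classpath):

(require '[clojure.java.jdbc :as jdbc])

(def redshift
  {:subprotocol "postgresql"
   :subname     "//example.redshift.amazonaws.com:5439/analytics"
   :user        "…"
   :password    "…"})

(jdbc/query redshift ["select count(*) from events where day = ?" "2016-10-06"])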

lmergen13:10:19

that was an article I wrote 6 months ago about our architecture

lmergen13:10:40

it’s pretty interesting how much you can do with the cloud nowadays

jeroenvandijk13:10:45

nice featured post 😎

dm313:10:43

that's interesting

jeroenvandijk13:10:51

I wouldn’t mind getting rid of Kafka if something with similar performance were offered on AWS, but I think Kinesis is still a bit different. But it is amazing how well it all scales

lmergen13:10:27

it’s fascinating indeed

lmergen13:10:01

AWS is good at the start, when you do not (yet) need very complex configurations and you’re not into high volume yet

lmergen13:10:29

you do have to take into account that “eventual consistency” is very real

lmergen13:10:44

especially with things like S3

jeroenvandijk13:10:01

yeah, and it’s different in some regions too 🙂

jeroenvandijk13:10:07

but I think they have fixed that now

lmergen16:10:49

@jeroenvandijk: looks very sweet!