This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-10-06
Channels
- # aleph (79)
- # bangalore-clj (3)
- # beginners (49)
- # boot (74)
- # cider (10)
- # cljs-dev (21)
- # cljsrn (2)
- # clojure (105)
- # clojure-berlin (1)
- # clojure-brasil (1)
- # clojure-dusseldorf (1)
- # clojure-korea (1)
- # clojure-poland (3)
- # clojure-russia (38)
- # clojure-spec (146)
- # clojure-uk (20)
- # clojurescript (70)
- # cloverage (1)
- # component (1)
- # core-async (23)
- # css (16)
- # cursive (22)
- # datascript (1)
- # datomic (22)
- # defnpodcast (6)
- # emacs (60)
- # events (1)
- # hoplon (94)
- # jobs (1)
- # jobs-rus (13)
- # luminus (11)
- # off-topic (11)
- # om (48)
- # onyx (5)
- # proton (7)
- # re-frame (87)
- # reagent (39)
- # rethinkdb (1)
- # ring-swagger (14)
- # rum (6)
- # specter (14)
- # untangled (105)
- # vim (6)
- # yada (22)
I’ve done some catching up on reading. I think I understand the idea behind bulkheads now 🙂
I think both core.async and manifold are suitable, as they both allow you to run things on dedicated thread pools
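The bulkhead idea being discussed, isolating each dependency on its own bounded thread pool so one slow dependency cannot exhaust shared resources, can be sketched in plain Java. All names and pool sizes below are illustrative, not taken from core.async, manifold, or Hystrix:

```java
import java.util.concurrent.*;

// Minimal bulkhead sketch: each downstream dependency gets its own
// bounded thread pool, so a slow "payments" service cannot starve
// calls to "inventory". Sizes and names are illustrative only.
public class Bulkhead {
    private final ExecutorService pool;

    public Bulkhead(int threads, int queueCapacity) {
        // Bounded queue + AbortPolicy: when the bulkhead is full,
        // callers fail fast instead of piling up latency.
        this.pool = new ThreadPoolExecutor(
                threads, threads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.AbortPolicy());
    }

    public <T> Future<T> submit(Callable<T> task) {
        // Throws RejectedExecutionException when saturated
        return pool.submit(task);
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

In manifold you would get roughly the same isolation by passing a dedicated executor to your deferreds/streams, and with core.async by making sure blocking work runs on a pool you manage yourself.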
I don’t think Erlang supports bulkheads out of the box. The actors themselves might restart, but with Erlang you can also end up with actors in a distributed deadlock if you are not careful
I guess I don't see how you can implement "generic" bulkheads; I don't find the definition that useful
If by generic you mean "can never fail", I agree; I don’t see how you can implement that either
Anyway, I don't see it as an implementable pattern, but rather as an implementation approach in a more general sense
This book has some useful context too (this is a preview; the original is probably better): https://www.nginx.com/wp-content/uploads/2015/01/Building_Microservices_Nginx.pdf
But we don’t really do microservices or SOA, so most of it doesn’t apply to my application. The general principles, though, especially in the Nygard book, are useful as rules of thumb to prevent cascading failures
What I find appealing about Hystrix at the moment is its out-of-the-box monitoring and latency control. And because I don’t want to be trapped in a Java-like architecture and I want some custom controls, I’m trying to build my own version
I’m planning to use this for latency control too (if I get a patch in): https://github.com/ztellman/aleph/issues/273
We had some queueing issues 🙂
No, I didn’t know about those before. However, I think those are aggregates? I need to know on a per-request level
Since we have only 100ms or so, if you have already been waiting for 75ms it might not be worth continuing to take the offering into account
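That kind of per-request budget check (give up when most of a roughly 100ms window has already been spent) can be sketched in plain Java; the class name and thresholds here are hypothetical:

```java
// Sketch of a per-request latency budget: if most of the (roughly
// 100 ms) window is already gone, e.g. spent queueing, drop the
// request instead of doing more work. Names are illustrative only.
public class LatencyBudget {
    private final long budgetMillis;
    private final long startNanos;

    public LatencyBudget(long budgetMillis) {
        this.budgetMillis = budgetMillis;
        this.startNanos = System.nanoTime();
    }

    public long remainingMillis() {
        long elapsed = (System.nanoTime() - startNanos) / 1_000_000;
        return budgetMillis - elapsed;
    }

    // e.g. skip the offering if fewer than 25 ms remain of a 100 ms budget
    public boolean worthContinuing(long minimumRemainingMillis) {
        return remainingMillis() >= minimumRemainingMillis;
    }
}
```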
public enum Metric {
    QUEUE_LENGTH,
    QUEUE_LATENCY,
    TASK_LATENCY,
    TASK_ARRIVAL_RATE,
    TASK_COMPLETION_RATE,
    TASK_REJECTION_RATE,
    UTILIZATION
}
what executor gives you
Thanks, looks useful to add to our monitoring as well
How do you handle errors in a websocket/sse stream? I have an error somewhere and cannot find it easily. I would like to wrap the stream with some error logging. manifold.deferred/catch doesn’t seem to work in this case
ah thanks
I already thought I had misunderstood the examples
From some manual testing it seems you have to wrap the put! call with error handling. Hmm, will test a bit more
there's a bunch of cases where an error can happen inside some callback and the stream gets closed + the error gets logged
in this case I would be happy with a log, but haven't seen that either
I’m actually not sure what happens
Ah yes, configuring logging correctly helps 🙂
Would be nice though to have some way to control where the exception goes when the unexpected happens
Exception management and async is always a gnarly issue. I think in manifold’s case, simply making the error handler pluggable would indeed be good enough
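Outside manifold specifics, one JVM-level way to control where unexpected exceptions go is to install an UncaughtExceptionHandler on the pool's threads; a minimal sketch, with illustrative names:

```java
import java.util.concurrent.ThreadFactory;

// Sketch: route otherwise-silent exceptions from async tasks to a
// single handler, e.g. a logger. Note this fires for execute(), not
// for submit(), which captures the exception in the returned Future.
public class LoggingThreadFactory implements ThreadFactory {
    private final Thread.UncaughtExceptionHandler handler;

    public LoggingThreadFactory(Thread.UncaughtExceptionHandler handler) {
        this.handler = handler;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        t.setUncaughtExceptionHandler(handler);
        return t;
    }
}
```

Passing such a factory to a ThreadPoolExecutor gives every worker thread the same fallback for "the unexpected".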
@jeroenvandijk I found out yesterday that next week’s meetup is going to be about Onyx, so that’s gonna be interesting!
Yes indeed 🙂
We are using it pre-production. What about you?
we’re mostly focused on AWS, and as such use Redshift for our data warehouse, and for the more complex processing we use Spark
How do you get the data to S3/Redshift?
We use it to move data from our bidder cluster via kafka to s3 and other services
But it’s quite complicated and I wouldn’t recommend it if you have something simple working
Ah I see, so they manage the connection to Redshift, I guess
sounds good
we need to send data to other services too
Yes, basically it caches on S3 until it reaches a certain threshold (we use every 900 seconds or 64MB) and then imports it into Redshift
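The size-or-age flush rule described here (import once the staged batch reaches 64MB or 900 seconds, whichever comes first) can be sketched like this; the class and method names are hypothetical:

```java
// Sketch of the "flush at 64 MB or 900 s, whichever comes first" rule
// used when staging data on S3 before importing it into Redshift.
// All names and thresholds below are illustrative.
public class BatchFlushPolicy {
    private final long maxBytes;
    private final long maxAgeMillis;
    private long bufferedBytes = 0;
    private long batchStartMillis;

    public BatchFlushPolicy(long maxBytes, long maxAgeMillis, long nowMillis) {
        this.maxBytes = maxBytes;
        this.maxAgeMillis = maxAgeMillis;
        this.batchStartMillis = nowMillis;
    }

    public void record(long bytes) {
        bufferedBytes += bytes;
    }

    // True once either threshold is crossed; the caller would then
    // upload the batch to S3 and trigger the import into Redshift.
    public boolean shouldFlush(long nowMillis) {
        return bufferedBytes >= maxBytes
                || nowMillis - batchStartMillis >= maxAgeMillis;
    }

    public void reset(long nowMillis) {
        bufferedBytes = 0;
        batchStartMillis = nowMillis;
    }
}
```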
I would like to try Redshift too on our data
no? not through a jdbc connection?
I hope to be able to have a SQL interface on our data via either Redshift, Drill, or BigQuery sometime
nice featured post 😎
I wouldn’t mind getting rid of Kafka if something with similar performance were offered on AWS, but I think Kinesis is still a bit different. But it is amazing how well it all scales
AWS is good for the start, when you do not (yet) need very complex configurations and you’re not into high volume yet
yeah and different in some regions too 🙂
but i think now they have fixed that
@jeroenvandijk: looks very sweet!