This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2015-11-02
Channels
- # admin-announcements (15)
- # aws (35)
- # beginners (6)
- # boot (183)
- # cider (51)
- # clara (17)
- # cljs-dev (32)
- # clojure (67)
- # clojure-dev (7)
- # clojure-india (1)
- # clojure-japan (3)
- # clojure-norway (1)
- # clojure-russia (26)
- # clojurescript (85)
- # clojurex (4)
- # community-development (1)
- # cursive (18)
- # data-science (1)
- # datomic (46)
- # devcards (29)
- # events (7)
- # funcool (21)
- # hoplon (10)
- # ldnclj (2)
- # lein-figwheel (16)
- # off-topic (60)
- # om (37)
- # onyx (8)
- # re-frame (23)
- # reagent (5)
- # yada (6)
@cfleming: what are you thinking about lambda-ing?
btw there is #C09N0H1RB
@alandipert: True, although I started talking about the profits involved rather than the technical bits, and the conversation degraded from there
@alandipert: Mostly things like low-volume API servers. One example would be licence generation when someone buys Cursive (via webhook from payment processor), or receiving exception reports when people submit them from the IDE.
Similarly for IDE usage stats, they’re only sent once a week per user - right now they get POSTed to nginx which puts the POST body in a log, and I download it and parse it once in a while. It’s pretty ghetto, Lambda would be nice for that.
@alandipert: Are you lambda-ing?
@cfleming: indeed... but nothing i wrote. we deploy a lambda function that amazon open-sourced to shovel data from S3 into redshift every time an s3 event fires
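The S3-event-to-Redshift shoveling described above can be sketched as a Lambda handler. This is a minimal illustration, not the function Amazon open-sourced: the table name, IAM role, and the fact that the statement is only returned rather than executed are all assumptions.

```python
# Hypothetical sketch of a Lambda fired by an S3 event that loads the
# new object into Redshift via COPY. Table name and IAM role are made
# up for illustration; a real function would execute the statement
# against Redshift instead of returning it.

def build_copy_statement(event,
                         table="impressions",
                         iam_role="arn:aws:iam::123456789012:role/RedshiftCopy"):
    """Extract bucket/key from the first S3 event record and build the
    Redshift COPY statement that would load that object."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return (
        f"COPY {table} FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{iam_role}' FORMAT AS JSON 'auto';"
    )

def handler(event, context):
    # In production this would run via a Redshift connection
    # (e.g. psycopg2); here we just build the SQL.
    return build_copy_statement(event)
```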
i have investigated using it to deploy our ad-tier web things, currently node on EC2
but we decided that was a no-go, mostly because lambda doesn't have a disk, and we need to spool if dynamo is down
common enough to be a concern
well, it was down recently... for like a week it was highly unstable
but it can also be effectively down because it's throttling us
we could, but it would still involve network... and we also see flakiness there
basically these impression requests coming in are our data of record and we design to lose as few of them as we can
(they're how everyone gets paid)
Yeah, you’re not filling me with confidence that I could use it for licence generation, for example.
the direction we want to go is that nothing directly reads or writes from network or AWS service, but from a local disk-backed queue
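The "nothing writes to the network directly, only to a local disk-backed queue" direction above can be sketched roughly like this. All names here are illustrative assumptions; the point is just that an impression is durable on local disk before any network call, and that records which fail to ship (Dynamo down or throttling) stay spooled for retry.

```python
import json
import os

# Illustrative sketch of a local disk-backed queue: writers append to a
# spool file, and a separate drain step ships records over the network
# with retry-on-failure semantics.

class DiskQueue:
    def __init__(self, path):
        self.path = path

    def enqueue(self, record):
        # Append-only spool: the record is on disk before we ever
        # touch the network, so a Dynamo outage can't lose it.
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

    def drain(self, ship):
        """Ship spooled records via `ship` (e.g. a DynamoDB PutItem
        call); keep anything that fails for a later retry. Returns the
        number of records successfully shipped."""
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            records = [json.loads(line) for line in f]
        remaining, shipped = [], 0
        for rec in records:
            try:
                ship(rec)
                shipped += 1
            except Exception:
                remaining.append(rec)  # downstream down/throttled: keep it
        with open(self.path, "w") as f:
            for rec in remaining:
                f.write(json.dumps(rec) + "\n")
        return shipped
```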
well, i think that would be a good use-case
since you own the client and can retry
we don't own the clients and never get the impression back
yeah i still think lambda would be worth trying in your case
were it not that the error log situation is awful
so, when lambda problems do happen it's difficult to diagnose and fix
not that i'm aware of
yeah that's what i would do if i didn't need to scale beyond 1 machine
keep all the logs on 1 box, easily see what happened/what's going on
I can always do an OOB thing like send an email when things go wrong, since the amount of data is small and errors would (supposedly) be infrequent
the other lambda motivation for us is more precise autoscaling
also true
Lambda seems like it ought to allow me to get rid of the VPS entirely - serve the static site out of S3, and any moving parts can be lambdas
it's definitely worth at least trying, it really got the creative juices flowing
a weird and new way to build apps
I’d have to run a JVM one (Kotlin, probably) for licence generation because of the encryption, but I’d be interested to try CLJS for the error catching case.
for the errors, you could put them on kinesis... and then have other lambdas that consume from kinesis
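The Kinesis fan-in suggested here might look like the following consumer-side handler. Kinesis does deliver record data base64-encoded in the Lambda event; the payload field names are made up for illustration, and a real consumer would persist or alert rather than just return the batch.

```python
import base64
import json

# Sketch of a Lambda consuming error reports from a Kinesis stream.
# Lambda delivers Kinesis record data base64-encoded, so each record
# is decoded before parsing. Payload shape is a hypothetical example.

def handler(event, context):
    errors = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        errors.append(json.loads(payload))
    # A real consumer would write these somewhere durable or page
    # someone; here we just return the decoded batch.
    return errors
```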
@alandipert: @cfleming I'm hearing about outages now and then on AWS. What if your whole company runs on AWS... how does that work?
Bigger companies can fail over between availability zones etc, and can probably handle significant failures in a particular zone
Every so often (once a year?) AWS has a significant cross-zone outage, and no-one can see Netflix
@borkdude: yeah... gotta design for pieces of AWS going down in particular regions. predecessors at my company did not, but we're working on it