#off-topic
2017-12-01
qqq00:12:11

also, (as I failed to realize earlier), they're ebooks 🙂

andy.fingerhut05:12:38

@qqq Perhaps Amazon has already done this, and you didn't notice? 🙂

qqq06:12:14

@andy.fingerhut: it's entirely possible AWS has already done this; my point was: going AWS/Serverless changes mental model so much that even if they made these drastic changes, I wouldn't know (or really care, besides "ohh, nice, compute prices dropped again")

dominicm08:12:57

@qqq are you using lambda with clojure? How are you achieving that?

danm09:12:45

@dominicm What would be difficult about it? Lambdas can run JVM bytecode, because Java was one of the first languages they supported. What language you write the original code in doesn't really matter. If it compiles to JVM bytecode you can run it in a Lambda

dominicm09:12:15

Slow startup is the problem in my understanding.

cgrand10:12:56

If people versed in meta interpreters are interested, I could explain how to have portkey for cljs

qqq10:12:03

Is this a real problem? In practice, this seems to only matter when you're going from 0 jvm containers to 1 jvm container ;; all other times, you're hitting an already warmed-up jvm

cgrand10:12:47

any time the number of containers is incremented, the 1st request to the new one lags

cgrand10:12:22

and aws periodically cycles containers (to prevent leaks of all kinds I guess)

qqq10:12:38

I was under the impression that at 50% or 80% utilization, new containers are created

qqq10:12:48

so unless you suddenly double # of requests, you're hitting a warm container

cgrand10:12:34

yes sure but still once in a while you’ll have a laggy one (because of cycling on/off)

cgrand10:12:01

it all depends on if you are looking at average or worst case

qqq10:12:03

I'm looking at probabilities, and I suspect you can prove something like: if you get more than 100,000 requests / second, then the expected # of requests that hits a cold JVM is < 10

qqq10:12:21

these numbers are sorta-made up, I haven't done the actual calculation

qqq10:12:02

it just seems that under normal circumstances, you're going to hit a warm JVM, and the only time you hit a cold JVM is if there's a sudden spike where your spike in traffic outstrips AWS's ability to fire up new JVMs

cgrand10:12:48

P = coldstart-time (in seconds) / ((request-rate (per second) / N of containers) × container-max-lifetime (in seconds))

cgrand10:12:45

(Simplified) P = (coldstart-time (in seconds) × N of containers) / (request-rate (per second) × container-max-lifetime (in seconds))
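The simplified estimate above is easy to plug numbers into. A minimal sketch, taking cgrand's formula at face value (the example figures are made up, echoing qqq's "sorta-made up" numbers):

```python
# Sketch of cgrand's cold-start probability estimate.
# Assumption: each container incurs one cold start per lifetime, so the
# fraction of requests that land on a cold-starting container is roughly
#   P = (coldstart_time * n_containers) / (request_rate * container_max_lifetime)

def cold_start_fraction(coldstart_s, n_containers, rate_per_s, lifetime_s):
    """Expected fraction of requests that hit a cold container."""
    return (coldstart_s * n_containers) / (rate_per_s * lifetime_s)

# Example: 5 s JVM cold start, 100 containers, 1000 req/s, 1 h container lifetime
p = cold_start_fraction(5, 100, 1000, 3600)
print(p)  # 500 / 3_600_000 ≈ 0.000139
```

So under these (invented) numbers, cold starts are rare on average, which matches cgrand's point that it depends on whether you care about the average or the worst case.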

qqq11:12:03

@cgrand: let's assume: 1. cold startup time is 5 seconds 2. AWS keeps lambda utilization at 80% (fires up new containers when over 80% utilization) then, I would make the argument that if your traffic is growing at < 25% every 5 seconds, everything hits warm start

qqq11:12:35

I think you're assuming that Lambda is always at full utilization, so whenever traffic grows, some of the requests hit cold starts.

qqq11:12:24

I'm assuming that AWS tries to keep lambda at most 80% utilized (with this 20% buffer) -- and as long as traffic is growing at < 25% / 5 seconds, you'll never hit a cold start -- because the requests hit the warm 20% buffer while new JVMs are being spun up

qqq11:12:37

1. @cgrand is right, I was wrong 2. what I described is NOT lambda serverless; what I described is "ELB autoscaling" -- so my math is off -- because my mental model of how lambda scaling worked is off 3. @cgrand’s math is more correct

qqq11:12:47

[ this is after talking to aws business support ]

dominicm09:12:22

Depending on the problem you're solving 🙂

danm09:12:34

I spoke to someone at an AWS meetup the other week about that actually

danm09:12:01

As long as you're running your Lambdas quite frequently (like > once per 5 mins) you only pay the slow startup time on first run

danm09:12:51

They do something that, I admit, I don't fully understand yet as I've not done much with them, so that they're not reloading the code every time you call it unless the Lambda has gone 'cold'. So you don't pay the slow start

danm09:12:25

The downside of this, apparently, is that Lambdas have 500MB of scratch disk to store temp files in and such, and this is not cleaned between runs if the Lambda has been kept hot for your script, only when it goes cold and someone else might get it
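The warm-container behaviour danm describes is easy to observe: state left in `/tmp` survives between invocations until the container is recycled. A minimal sketch (the marker path and handler shape are illustrative, not an AWS API; it can be exercised locally by calling the handler directly):

```python
# Hedged sketch: detecting a warm Lambda container via leftover /tmp state.
import os

MARKER = "/tmp/warm-container-marker"  # /tmp persists across warm invocations

def handler(event, context=None):
    """Return 'cold' on the first run in a container, 'warm' afterwards."""
    if os.path.exists(MARKER):
        return {"start": "warm"}
    open(MARKER, "w").close()  # leave state behind, as danm describes
    return {"start": "cold"}
```

This is also exactly the leak the person at the meetup hit: anything written to `/tmp` is visible to later invocations in the same container unless you clean it up yourself.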

danm09:12:21

So the person I was speaking to was trying to do some secure stuff and finding files they didn't expect that were a) tainting their results, and b) possibly allowing information leak between runs

qqq10:12:39

the 'tree-shaking' algorithm is amazing

cgrand10:12:50

Let’s say it’s promising foundations

qqq10:12:11

@cgrand portkey SLA is pretty good too: https://github.com/portkey-cloud/portkey/issues/36 Oct 15: I post error using portkey in boot repl (portkey originally designed to be used with lein) Oct 16: issue fixed

borkdude10:12:16

@qqq re “aws re-invent = worst nightmare” did you mean this conference? https://reinvent.awsevents.com/agenda/?trk=www.google.com

qqq10:12:33

@borkdude: not sure; one day, 1/3rd of news.yc frontpage was "AWS ReInvent releases feature XYZ", and each of those features killed off some class of startups

borkdude10:12:43

@qqq so “AWS re-invent” as some generic name for a startup who is providing some service on top of AWS?

qqq10:12:41

@borkdude: sorry, I misread your question, you're right, by re-invent, I meant https://reinvent.awsevents.com/

qqq10:12:58

in particular, the conference where AWS announces a bunch of new AWS features

derpocious18:12:43

I know I'm really late to the conversation, but I'm using ClojureScript for AWS Lambda running on Nodejs, not jvm

qqq21:12:01

Let X be any set. Let Y be a set of subsets of X where the union over elements of Y covers all of X (possibly with duplicates). Is there a formal math term for this? For example, if X = {1, 2, 3, 4} we can have Y = {{1, 2, 3}, {2, 4}}

phronmophobic21:12:20

>Covers are commonly used in the context of topology. If the set X is a topological space, then a cover C of X is a collection of subsets Uα of X whose union is the whole space X.

phronmophobic21:12:34

maybe subcover is more appropriate as I think a cover can include items that aren’t in X

qqq23:12:04

@smith.adriane: thanks! after reading over the defs, 'cover' is what I'm looking for, as I actually want to allow elems of Y to contain elems not in X

phronmophobic23:12:42

awesome! glad it helped. the more I read the page, the more confused I got