also, (as I failed to realize earlier), they're ebooks 🙂


@qqq Perhaps Amazon has already done this, and you didn't notice? 🙂


@andy.fingerhut: it's entirely possible AWS has already done this; my point was: going AWS/serverless changes the mental model so much that even if they made these drastic changes, I wouldn't know (or really care, beyond "ohh, nice, compute prices dropped again")


@qqq are you using lambda with clojure? How are you achieving that?


@dominicm What would be difficult about it? Lambdas can run JVM bytecode, because Java was one of the first languages they supported. What language you write the original code in doesn't really matter: if it compiles to JVM bytecode, you can run it in a Lambda
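As a sketch of that point: Lambda's Java runtime can invoke a plain public method on a class, without the `aws-lambda-java-core` interfaces, so any JVM language that emits an equivalent class file works. The class and method names here are assumptions for illustration, not from the thread:

```java
// Minimal sketch of a Lambda-compatible handler class, assuming a
// handler configuration like "Handler::handleRequest". Lambda
// serializes the return value to JSON and deserializes the input
// into the parameter type; no AWS SDK types are required.
public class Handler {
    public String handleRequest(String input) {
        return "echo: " + input;
    }
}
```

A Clojure (or Kotlin, Scala, ...) namespace compiled with `gen-class` to an equivalent class file would be invoked the same way.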


Slow startup is the problem in my understanding.


If people versed in meta interpreters are interested, I could explain how to have portkey for cljs


Is this a real problem? In practice, this seems to only matter when you're going from 0 JVM containers to 1 JVM container; all other times, you're hitting an already warmed-up JVM


any time the number of containers is incremented, the 1st request to the new one lags


and aws periodically cycles containers (to prevent leaks of all kinds I guess)


I was under the impression that at 50% or 80% utilization, new containers are created


so unless you suddenly double # of requests, you're hitting a warm container


yes sure but still once in a while you’ll have a laggy one (because of cycling on/off)


it all depends on if you are looking at average or worst case


I'm looking at probabilities, and I suspect you can prove something like: if you get more than 100,000 requests / second, then the expected # of requests that hit a cold JVM is < 10


these numbers are sorta-made up, I haven't done the actual calculation


it just seems that under normal circumstances, you're going to hit a warm JVM, and the only time you hit a cold JVM is if there's a sudden spike where the growth in traffic outstrips AWS's ability to fire up new JVMs


P = coldstart-time (in seconds) / ((request-rate (per second) / N of containers) × container-max-lifetime (in seconds))


(Simplified) P = (coldstart-time (in seconds) × N of containers) / (request-rate (per second) × container-max-lifetime (in seconds))
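A quick sanity check of the simplified formula, plugging in illustrative numbers (made up, as the thread itself concedes): 5 s cold starts, 100 containers, 100,000 req/s, and hourly container cycling.

```java
// Sanity-check of the simplified cold-start probability formula.
// All the specific numbers below are illustrative assumptions.
public class ColdStart {
    // P = (coldStart * containers) / (requestRate * lifetime)
    static double coldStartProbability(double coldStartSec, double containers,
                                       double requestRatePerSec, double lifetimeSec) {
        return (coldStartSec * containers) / (requestRatePerSec * lifetimeSec);
    }

    public static void main(String[] args) {
        // 5 s cold start, 100 containers, 100,000 req/s, 3600 s lifetime
        double p = coldStartProbability(5, 100, 100_000, 3600);
        // fraction of requests expected to land on a cold JVM
        System.out.println(p);
        // expected cold-hit requests per second at this traffic level
        System.out.println(p * 100_000);
    }
}
```

With these numbers the expected cold hits per second come out well under the "< 10" ballpark guessed earlier in the thread.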


@cgrand: let's assume: 1. cold startup time is 5 seconds, 2. AWS keeps lambda utilization at 80% (fires up new containers when over 80% utilization); then I would make the argument that if your traffic is growing at < 25% every 5 seconds, every request hits a warm start


I think you're assuming that Lambda is always at full utilization, so whenever traffic grows, some requests hit cold starts.


I'm assuming that AWS tries to keep lambdas at most 80% utilized (leaving a 20% buffer) -- and a 20% buffer on top of 80% in use is 0.2/0.8 = 25% headroom -- so as long as traffic is growing at < 25% / 5 seconds, you'll never hit a cold start, because new requests land in the warm 20% buffer while new JVMs are being spun up


1. @cgrand is right, I was wrong; 2. what I described is NOT Lambda serverless; what I described is "ELB autoscaling" -- so my math is off, because my mental model of how Lambda scaling works was off; 3. @cgrand’s math is correct


[ this is after talking to aws business support ]


Depending on the problem you're solving 🙂


I spoke to someone at an AWS meetup the other week about that actually


As long as you're running your Lambdas quite frequently (like > once per 5 mins) you only pay the slow startup time on first run


They do something that, I admit, I don't really understand fully yet, as I've not done much with them, so that they're not reloading the code every time you call it unless the Lambda has gone 'cold'. So you don't pay the slow start


The downside of this, apparently, is that Lambdas have 500MB of scratch disk to store temp files in, and this is not cleaned between runs while the Lambda is kept hot for your script; it's only cleaned when it goes cold, at which point someone else might get the container


So the person I was speaking to was trying to do some secure stuff and finding files they didn't expect that were a) tainting their results, and b) possibly allowing information leak between runs


the 'tree-shaking' algorithm is amazing


Let’s say it’s promising foundations


@cgrand portkey SLA is pretty good too: Oct 15: I post error using portkey in boot repl (portkey originally designed to be used with lein) Oct 16: issue fixed


@qqq re “aws re-invent = worst nightmare” did you mean this conference?


@borkdude: not sure; one day, 1/3rd of the news.yc frontpage was "AWS ReInvent releases feature XYZ", and each of those features killed off some class of startups


@qqq so “AWS re-invent” as some generic name for a startup who is providing some service on top of AWS?


@borkdude: sorry, I misread your question; you're right, by re-invent, I meant


in particular, the conference where AWS announces a bunch of new AWS features


I know I'm really late to the conversation, but I'm using ClojureScript for AWS Lambda running on Nodejs, not jvm


Let X be any set. Let Y be a set of subsets of X where the union over the elements of Y covers all of X (possibly with overlap). Is there a formal math term for this? For example, if X = {1, 2, 3, 4}, we can have Y = {{1, 2, 3}, {2, 4}}


>Covers are commonly used in the context of topology. If the set X is a topological space, then a cover C of X is a collection of subsets Uα of X whose union is the whole space X.


maybe subcover is more appropriate as I think a cover can include items that aren’t in X


@smith.adriane: thanks! after reading over the defs, 'cover' is what I'm looking for, as I actually want to allow elems of Y to contain elems not in X
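For reference, the definition being converged on here can be written compactly (this is the standard topology-textbook phrasing, not anything specific from the thread):

```latex
% C is a cover of X iff every element of X lies in some member of C;
% members of C are allowed to contain points outside X.
X \subseteq \bigcup_{U \in C} U
% Example from above: X = \{1,2,3,4\},\ Y = \{\{1,2,3\},\{2,4\}\},
% and \bigcup Y = \{1,2,3,4\} \supseteq X, so Y covers X.
```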


awesome! glad it helped. the more I read the page, the more confused I got