This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-12-01
Channels
- # adventofcode (11)
- # aws (8)
- # beginners (70)
- # boot (2)
- # cider (9)
- # cljs-dev (29)
- # cljsrn (2)
- # clojure (67)
- # clojure-android (2)
- # clojure-dusseldorf (5)
- # clojure-greece (12)
- # clojure-italy (4)
- # clojure-nl (3)
- # clojure-poland (3)
- # clojure-russia (5)
- # clojure-spec (80)
- # clojure-uk (9)
- # clojurescript (73)
- # core-async (17)
- # cursive (1)
- # data-science (5)
- # datomic (29)
- # emacs (5)
- # fulcro (257)
- # graphql (2)
- # hoplon (2)
- # jobs (2)
- # klipse (3)
- # leiningen (9)
- # lumo (4)
- # nyc (1)
- # off-topic (48)
- # om (7)
- # other-languages (11)
- # pedestal (4)
- # re-frame (18)
- # remote-jobs (1)
- # rum (10)
- # shadow-cljs (5)
- # spacemacs (20)
- # sql (5)
- # test-check (44)
- # unrepl (8)
- # yada (9)
@qqq Perhaps Amazon has already done this, and you didn't notice? 🙂
@andy.fingerhut: it's entirely possible AWS has already done this; my point was: going AWS/Serverless changes mental model so much that even if they made these drastic changes, I wouldn't know (or really care, besides "ohh, nice, compute prices dropped again")
@dominicm What would be difficult about it? Lambdas can run JVM bytecode, because Java was one of the first languages they supported. What language you write the original code in doesn't really matter. If it compiles to JVM bytecode you can run it in a Lambda
If people versed in meta interpreters are interested, I could explain how to have portkey for cljs
Is this a real problem? In practice, this seems to only matter when you're going from 0 JVM containers to 1; all other times, you're hitting an already warmed-up JVM
yes sure but still once in a while you’ll have a laggy one (because of cycling on/off)
I'm looking at probabilities, and I suspect you can prove something like: if you get more than 100,000 requests / second, then the expected # of requests that hits a cold JVM is < 10
it just seems that under normal circumstances, you're going to hit a warm JVM, and the only time you hit a cold JVM is if there's a sudden spike where your spike in traffic outstrips AWS's ability to fire up new JVMs
P = coldstart-time (in seconds) / ((request-rate (per second) / N of containers) × container-max-lifetime (in seconds))
(Simplified) P = coldstart-time (in seconds) × N of containers / (request-rate (per second) × container-max-lifetime (in seconds))
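A minimal sketch of the simplified estimate above (function and variable names are my own; the numbers plugged in are purely illustrative, not measurements):

```python
def cold_start_fraction(coldstart_s, n_containers, request_rate, container_lifetime_s):
    """Back-of-the-envelope estimate from the thread:
    P = coldstart_time * N_containers / (request_rate * container_max_lifetime)."""
    return (coldstart_s * n_containers) / (request_rate * container_lifetime_s)

# Illustrative numbers: 5 s cold start, 100 containers,
# 100,000 requests/s, containers recycled every hour.
p = cold_start_fraction(5, 100, 100_000, 3600)  # ~1.4e-6
```

At that request rate the estimated fraction of requests hitting a cold JVM is tiny, which is the intuition the thread is probing.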
@cgrand: let's assume: 1. cold startup time is 5 seconds 2. AWS keeps lambda utilization at 80% (fires up new containers when over 80% utilization) then, I would make the argument that if your traffic is growing at < 25% every 5 seconds, everything hits warm start
I think you're assuming that Lambda is always at full utilization, so whenever traffic grows, some of them hits cold starts.
I'm assuming that AWS tries to keep lambda at most 80% utilized (with this 20% buffer) -- and as long as traffic is growing at < 25% / 5 seconds, you'll never hit a cold start -- because the requests hit the warm 20% buffer while new JVMs are being spun up
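The 25% figure follows from the assumed 80% utilization target: the idle fifth of capacity can absorb 0.2 / 0.8 = 25% extra traffic while new containers spin up. A tiny sketch of that arithmetic (names are mine, numbers illustrative):

```python
def max_growth_per_spinup(target_utilization):
    """Traffic growth the warm headroom can absorb during one
    container spin-up window, relative to current load: (1 - u) / u."""
    return (1 - target_utilization) / target_utilization

# With containers kept at 80% utilization, the warm buffer absorbs
# up to ~25% extra traffic per spin-up window (e.g. 5 s).
growth = max_growth_per_spinup(0.8)  # ≈ 0.25
```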
1. @cgrand is right, I was wrong 2. what I described is NOT lambda serverless; what I described is "ELB autoscaling" -- so my math is off -- because my mental model of how lambda scaling worked is off 3. @cgrand’s math is more correct
As long as you're running your Lambdas quite frequently (like > once per 5 mins) you only pay the slow startup time on first run
They do something that, I admit, I don't really understand fully yet as I've not done much with them, so that they're not reloading the code every time you call it unless the Lambda has gone 'cold'. So you don't pay the slow start every time
The downside of this, apparently, is that Lambdas have 500MB of scratch disk to store temp files in and such, and this is not cleaned between runs if the Lambda has been kept hot for your script, only when it goes cold and someone else might get it
So the person I was speaking to was trying to do some secure stuff and finding files they didn't expect that were a) tainting their results, and b) possibly allowing information leak between runs
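One defensive pattern against the stale scratch files described above is to wipe the scratch directory at the start of each invocation. A hedged sketch (the handler shape and the cleanup approach are illustrative, not an official AWS recommendation):

```python
import os
import shutil

def clean_scratch(path="/tmp"):
    """Remove leftover files from a previous warm invocation so an
    earlier run can't taint results or leak data into this one."""
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isdir(full):
            shutil.rmtree(full, ignore_errors=True)
        else:
            try:
                os.remove(full)
            except OSError:
                pass

def handler(event, context):
    clean_scratch()  # illustrative: wipe scratch space before doing any work
    # ... actual work goes here ...
    return {"status": "ok"}
```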
@dominicm: https://github.com/portkey-cloud/portkey / join us over at #portkey
Let’s say it has promising foundations
@cgrand portkey SLA is pretty good too: https://github.com/portkey-cloud/portkey/issues/36 Oct 15: I post error using portkey in boot repl (portkey originally designed to be used with lein) Oct 16: issue fixed
@qqq re “awd re-invent = worst nightmare” did you mean this conference? https://reinvent.awsevents.com/agenda/?trk=www.google.com
@borkdude: not sure; one day, 1/3rd of news.yc frontpage was "AWS ReInvent releases feature XYZ", and each of those features killed off some class of startups
@qqq so “AWS re-invent” as some generic name for a startup who is providing some service on top of AWS?
@borkdude: sorry, I misread your question, you're right, by re-invent, I meant https://reinvent.awsevents.com/
I know I'm really late to the conversation, but I'm using ClojureScript for AWS Lambda running on Nodejs, not jvm
Let X be any set. Let Y be a set of subsets of X where the union over elements of Y covers all of X (possibly with duplicates). Is there a formal math term for this? For example, if X = {1, 2, 3, 4} we can have Y = {{1, 2, 3}, {2, 4}}
https://en.wikipedia.org/wiki/Cover_(topology), isn’t “Cover” the term for that?
>Covers are commonly used in the context of topology. If the set X is a topological space, then a cover C of X is a collection of subsets Uα of X whose union is the whole space X.
maybe subcover is more appropriate as I think a cover can include items that aren’t in X
@smith.adriane: thanks! after reading over the defs, 'cover' is what I'm looking for, as I actually want to allow elems of Y to contain elems not in X
awesome! glad it helped. the more I read the page, the more confused I got
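The definition settled on above is easy to check mechanically: Y covers X iff the union of Y's members contains every element of X, with members of Y free to include extras. A small sketch:

```python
def covers(Y, X):
    """True iff the union of the sets in Y contains every element of X.
    Elements of Y are allowed to contain items outside X."""
    return set(X) <= set().union(*Y)

# The example from the question: X = {1, 2, 3, 4}, Y = {{1, 2, 3}, {2, 4}}
assert covers([{1, 2, 3}, {2, 4}], {1, 2, 3, 4})
# A cover's members may include elements outside X:
assert covers([{1, 2, 3, 5}, {4}], {1, 2, 3, 4})
```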