
Aside from the smaller upload, the approach of deploying from inside the REPL gives fast feedback.
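For context, REPL-driven deployment with portkey looks roughly like this (a minimal sketch; the `portkey.core` namespace and the exact `pk/mount!` arguments are assumptions based on the discussion, and actually running it requires AWS credentials):

```clojure
(require '[portkey.core :as pk])

;; An ordinary Clojure function to expose as a lambda.
(defn hello [request]
  {:status 200 :body "Hello from Lambda!"})

;; Deploy to AWS Lambda + API Gateway straight from the REPL.
;; Redefine hello and call pk/mount! again for fast iteration.
(pk/mount! hello "/hello")
```

The feedback loop is fast because each redeploy happens from the running REPL, and tree shaking keeps the uploaded artifact small.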


@cgrand: a clarification on tree shaking: it's more "precise" than adding/removing entire libraries, right? As in, it can load a library and then make include/exclude decisions based on individual classes of the library?


Yes, it's more fine-grained. Inclusion is decided for each var and each class.


Is the following correct?

1. Machine DEV = the machine where we run a REPL; this is the machine from which we run `pk/mount!`.
2. When the API endpoint is hit, AWS fires up a new machine (or recycles an existing JVM), loads the lambda function, and executes it; call these machines BAR0, BAR1, BAR2, ...
3. We do NOT have REPL access to BAR0, BAR1, BAR2, ...; they are transient machines that AWS fires up and kills as needed.
4. Machine DEV can be either local (my laptop) or remote (an EC2 large instance running on AWS).

The video mentions there is some advantage to DEV being a remote EC2 instance (instead of my local machine). What exactly are those advantages?


Only one (well, two for a demo): bandwidth, and hence the upload time of the lambda. The second one, for the demo, is being less dependent on conference wifi (the REPL runs in tmux on EC2).


And all four of your assertions are correct.


Point 2 is a bit inexact: JVMs are not recycled between users. Either you get a fresh JVM or a JVM that has already been used for your lambda (so it's already loaded and initialized). Even a "hot" JVM will be cycled after a while.
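That warm/cold distinction can be observed from inside the handler itself (a hypothetical sketch; the handler name and return shape are illustrative, not from the talk):

```clojure
;; Top-level state is initialized once per JVM and survives across
;; invocations while that JVM stays warm; a cold start resets it.
(def invocation-count (atom 0))

(defn handler [request]
  ;; Returns 1 on a cold start; larger values mean this "hot" JVM
  ;; was reused for a previous invocation of the same lambda.
  {:count (swap! invocation-count inc)})
```

This is also why expensive setup (loading code, opening connections) is usually done at the top level: it runs once per JVM, not once per request.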