This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # babashka (68)
- # beginners (22)
- # calva (8)
- # cider (10)
- # cljs-dev (31)
- # clojure (35)
- # clojure-europe (6)
- # clojure-norway (17)
- # clojurescript (5)
- # conjure (10)
- # data-science (8)
- # datascript (10)
- # emacs (3)
- # fulcro (20)
- # humbleui (3)
- # london-clojurians (1)
- # membrane (9)
- # nbb (34)
- # off-topic (16)
- # pathom (15)
- # releases (1)
- # shadow-cljs (15)
- # sql (11)
bb install to make it easier to install scripts, jars and
bb.edn tasks from remote locations
@borkdude is it possible to inhibit the download of deps and pods that are not required for a specific task to run, on any bb command?
Preferably via a bb.edn setting.
For deps you can pass `-cp ''` to set the classpath to empty. For pods, I don't think there is such a thing yet.
Let me explain my use case. I would really like to get rid of the custom scripts I'm using in HL to download pods & deps to a specific directory according to architecture. Currently I'm using Docker to force this behavior. I would need exactly two things from bb to make it happen:
1. Allow setting the download directory for deps and pods, in bb.edn rather than an environment variable.
2. Support specifying the architecture in bb.edn, so that bb can download both the pods for the architecture on which the script is invoked and the pods for the one specified in bb.edn.
Workaround I'm considering: write a custom script that is basically a copy of bb's deps & pods resolution, but doesn't rely on Docker.
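A sketch of what points 1 and 2 could look like in bb.edn (hypothetical: the `:pods/download-dir` and `:pods/arch` keys do not exist in babashka; they only illustrate the request):

```clojure
{:deps {medley/medley {:mvn/version "1.4.0"}}
 ;; Hypothetical keys -- not real babashka settings, shown only to
 ;; illustrate the two requests above:
 :pods/download-dir "./deploy/pods"  ; 1. where deps & pods get downloaded
 :pods/arch ["linux/aarch64"]}       ; 2. extra target architectures to fetch
```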
Would doing this in bb.edn be a good idea though? I don't think you would want this behavior for local development, for example. This is why environment variables exist.
Hmm, I think you might be right. Let users develop their bb application how they want, and handle the pod download in some kind of pack command.
What about the deps then?
Nah, I don't want the uberjar. How then could I modify the sources of the function on the Lambda dashboard?
Hmm. You're right.
I already force my users to pack deps into a separate layer, so it's not a big deal! Thank you so much @borkdude
Hmm, so it seems the only thing I lack now is an env variable for setting the architecture of the pods to download. What do you think about that?
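As a minimal sketch of how such an env var could interact with the host platform (the `PODS_TARGET_ARCH` variable name is made up here, not an actual babashka setting):

```shell
# Detect the host platform the way a pod-download script might,
# then let a hypothetical env var override the target architecture.
os=$(uname -s | tr '[:upper:]' '[:lower:]')  # e.g. "linux" or "darwin"
host_arch=$(uname -m)                        # e.g. "x86_64" or "aarch64"
# PODS_TARGET_ARCH is an illustrative override, not a real babashka var:
target_arch="${PODS_TARGET_ARCH:-$host_arch}"
echo "downloading pods for: ${os}/${target_arch}"
```

Running it as `PODS_TARGET_ARCH=aarch64 ./download-pods.sh` on an x86_64 machine would then fetch the arm64 Lambda pods instead of the host's.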
Why would you download pods for a different architecture than the machine you're running on?
This is a very good question. Imagine someone developing Lambda functions on Mac or Windows with Babashka and pods. Since we want to minimize the Lambda cold start, it's preferable to have these pods packed into a layer, to avoid downloading the artifacts during cold start. Now, since a pod is a native program, we have to pack the variant that is compiled for the specific Lambda architecture (either Linux x86_64 or Linux arm64).
I think we can leave this as it is for now, but the one drawback is that people who want to use Babashka on AWS Lambda and have an OS other than Linux have to either use HL or invest their time reinventing what HL already ships with.
I don't understand the problem. All major OSes run Docker, and that runs Linux. And if you downloaded pods for a different OS/arch, what would you do with them locally?
I would pack it up and test locally with sam local or localstack.
It does indeed.
My assumption was that you would like minimal changes upstream that would make bb work like a charm with AWS Lambda, with much less effort than is needed now.
If you download pods for a different arch, you can't run them locally. So I don't know what problem it solves
And if you produce a layer, this runs in Docker, on the same arch you're targeting. So... I'm confused as to why you would like to have this option.
Introducing this option could even lead to more confusion as you have binaries that you can't run
You can run them locally with AWS SAM, which spins up an emulated AWS Lambda environment in Docker. Basically this option would remove an extra step when checking your lambda, since right now the user has to run Docker just to download the pods for the Linux arch and then pack them into a layer.
So if you execute locally with AWS SAM, say on Intel macOS, but your lambda is running ARM, how does that work?
They are using QEMU to make it happen. Very similar to what I'm doing to build ARM Docker images on GitHub CI, which runs on Intel: https://github.com/FieryCod/holy-lambda/blob/master/.github/workflows/ci.yml (L91)
QEMU is terribly slow. If you are using AWS SAM on Intel, you could just use the Intel pods to test locally too, right?
But for production you would still have to either use QEMU or find a CI that has an ARM option, essentially just to download pods.
GitHub Actions sucks btw; it's not a real CI imo. It was bolted on to be competitive. It's convenient for lightweight stuff like JS. CircleCI and Cirrus are much better.
I thought we were talking about the good old TM deployment option that doesn't involve Docker images, but zip files for layers and runtime.
I thought we were talking about docker images, but now we are not? Perhaps you can come up with a clear problem statement with a clear scenario because now I'm even more confused :)
™️ = trademark. I wanted to be funny, but since you asked, my subtle joke is now cringey 🥶
Don't get me wrong, I'm not angry or irritated. I just want to have the problem clear before another option is bolted on :)
So the use case is: a user makes a zip-file based deployment from local files, without docker?
I guess you could still download the pods manually from GitHub releases. They are not hard to find.
Yeah. I will probably replace my hacky Docker solution with a script that downloads pods from GitHub releases.
I imagine you're envisioning something like this, right?
```
BABASHKA_PODS_DIR=./deploy BABASHKA_PODS_ARCH=aarch64 bb prepare
bb uberjar ./deploy/deps.jar
```
The "danger" with this is that local pods would be overwritten and this would result into errors. E.g. we now have this:
```
$ ls ~/.babashka/pods/repository/org.babashka/go-sqlite3/0.1.0/
manifest.edn  metadata.cache  pod-babashka-go-sqlite3
```
So for this to work reliably, we should also encode the arch and OS into this dir structure.
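For example, the cache could gain OS/arch segments like this (illustrative only; the layout babashka actually adopts may differ):

```
~/.babashka/pods/repository/org.babashka/go-sqlite3/0.1.0/
  linux/aarch64/pod-babashka-go-sqlite3
  linux/x86_64/pod-babashka-go-sqlite3
  macos/aarch64/pod-babashka-go-sqlite3
```

That way, pods downloaded for a Lambda target could never clobber the ones the host actually runs.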
With that in place, I think we could introduce the env var, but not without doing this first
Yes, it's true.
But with these changes you can then say: "bb is fully AWS Lambda compatible, and it's up to you which custom runtime you're going to use. It might be HL or any other you like more."
Sure sir! I will make two issues: 1. Incorporating arch info into the directory structure. 2. An env variable for downloading pods for a different architecture.
Ok, let it be.
You want to make this new thing not just for AWS Lambda, essentially, but for potential future use cases that might come along.