
An idea: bb install, to make it easier to install scripts, jars and bb.edn tasks from remote locations. Inspired by deno install.

Karol Wójcik 09:07:53

@borkdude is it possible to prevent bb from downloading deps and pods that are not required for the specific task being run?

Karol Wójcik 09:07:57

Preferably via bb.edn setting.


For deps you can do -cp '' to set the classpath. For pods, I don't think there is such a thing yet
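A minimal sketch of that deps workaround (the script name is a placeholder):

```shell
# Pass an empty classpath so bb won't resolve or download the :deps
# declared in bb.edn; my_script.clj is a hypothetical file name.
bb -cp '' my_script.clj
```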

Karol Wójcik 10:07:20

Let me explain my use case. I would really like to get rid of the custom scripts I'm using in HL to download pods & deps to a specific directory according to architecture. Currently I'm using Docker to force this behavior. I would need exactly two things from bb to make it happen:
1. Allow setting a download directory for deps and pods, but in bb.edn rather than an environment variable.
2. Support specifying an architecture in bb.edn, so that bb can download both the pods for the architecture on which the script is invoked and the pods for the one specified in bb.edn.
Workaround I'm considering: write a custom script that is basically a copy of bb's deps & pods resolution, but doesn't rely on Docker.


Would doing this in bb.edn be a good idea though? I don't think you would want this behavior for local development, for example. This is why environment variables exist


I've already added this environment variable, since you asked for it previously

Karol Wójcik 10:07:29

Hmm, I think you might be right. Let's let users develop their bb application however they want, and handle the pod download in some kind of pack command.

Karol Wójcik 10:07:37

What about the deps then?


deps are usually best handled through an uberjar


and then invoke with bb uberjar.jar

Karol Wójcik 10:07:25

Nah, I don't want the uberjar. How could I then modify the sources of the function on the Lambda dashboard?


you can create this with bb uberjar foo.jar
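Putting the two suggestions together, the uberjar workflow might look like this (foo.jar and my.main are placeholder names):

```shell
# Pack the deps and sources from bb.edn into a single jar;
# -m sets the default main namespace for the jar.
bb uberjar foo.jar -m my.main
# Invoke the result directly, with no dependency resolution at startup:
bb foo.jar
```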


well, it's unlikely that you would modify the sources of deps, right?


you can do bb --cp src:deps.jar and still be able to edit the src directory
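A sketch of that setup, assuming deps.jar was produced with bb uberjar and my.main is a placeholder namespace:

```shell
# src is read from disk and stays editable; deps.jar provides the
# packed dependencies.
bb --cp src:deps.jar -m my.main
```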

Karol Wójcik 10:07:10

Hmm. You're right.

Karol Wójcik 10:07:31

I already force my users to pack deps into a separate layer, so it's not a big deal! Thank you so much @borkdude

👍 1
Karol Wójcik 13:07:55

Hmm, so it seems the only thing I lack now is an env variable for setting the architecture of the pods to download. What do you think about that?


Why would you download pods for a different architecture than the machine you're running on?

Karol Wójcik 13:07:06

That's a very good question. Imagine someone is developing Lambda functions on Mac or Windows with Babashka and pods. Since we want to minimize the Lambda cold start, it's preferable to have these pods packed into a layer, so there is no need to download the artifacts during cold start. Now, since a pod is a native program, we have to pack the variant that is compiled for the specific Lambda architecture (either Linux x86_64 or Linux arm64).


True. And this is done using Docker, right? For which you can set the architecture

Karol Wójcik 14:07:26

I think we can leave this as it is now; the only drawback is that people who want to use Babashka on AWS Lambda and have a different OS than Linux have to either use HL or invest their time in reinventing what HL already ships with.


I don't understand the problem. All major OSes run Docker, and that runs Linux. And if you downloaded pods for a different OS/arch, what would you do with them locally?


Building such things usually happens on CI or in Docker (or both)

Karol Wójcik 14:07:01

I would pack it up and test locally with sam local or localstack.


And that doesn't run in docker?

Karol Wójcik 14:07:20

It does indeed.

Karol Wójcik 14:07:52

My assumption was that you would like to have minimal changes upstream that would make bb work like a charm with AWS Lambda, with much less effort than is needed now.


That would be nice, but so far I haven't heard a reason why something should change


If you download pods for a different arch, you can't run them locally. So I don't know what problem it solves


And if you produce a layer, this runs in docker, in the same arch you're targeting. So... I'm confused as to why you would like to have this option.


Introducing this option could even lead to more confusion as you have binaries that you can't run

Karol Wójcik 14:07:28

You can run them locally with AWS SAM, which spins up an emulated AWS Lambda environment in Docker. Basically this option would remove that extra step for checking your Lambda, since currently it requires the user to run Docker just to download the pods for the Linux arch and then pack them into a layer.
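For context, local testing with AWS SAM is a single command (MyFunction is a placeholder logical ID from the SAM template):

```shell
# Spin up the emulated Lambda environment in Docker and invoke the
# function once; assumes Docker and an initialized SAM project.
sam local invoke MyFunction
```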


So if you execute locally with AWS SAM, say, on intel macOS, but your lambda is running ARM. How does that work?

Karol Wójcik 14:07:16

They are using QEMU to make it happen. Very similar to what I'm doing to build ARM Docker images on GitHub CI, which runs on Intel.


QEMU is terribly slow. If you are using AWS SAM on Intel, you could just use the Intel pods to test locally too, right?

Karol Wójcik 14:07:02

But still, for production you would have to either use QEMU or find a CI that has an ARM option, essentially just to download pods.


But if you're producing an aarch64 docker image, don't you need that anyway?


GitHub Actions sucks btw, it's not a real CI imo. It was bolted on to be competitive. It's convenient for lightweight stuff like JS. CircleCI and Cirrus are much better.

❤️ 1

They both support arm

Karol Wójcik 14:07:41

I thought we were talking about the good old™ deployment option that doesn't involve Docker images, but zip files for the layers and runtime.


Sorry, what is TM?


I thought we were talking about docker images, but now we are not? Perhaps you can come up with a clear problem statement with a clear scenario because now I'm even more confused :)

Karol Wójcik 14:07:39

™️ - trademark. I wanted to be funny, but since you asked, now my subtle joke is cringey 🥶


haha no worries


Don't get me wrong, I'm not angry or irritated, I just want to have the problem clear before another option is bolted on :)


So the use case is: a user makes a zip-file based deployment from local files, without docker?

👍 1

I guess you could still download the pods manually from GitHub releases. They are not hard to find

Karol Wójcik 14:07:46

Yeah. I will probably replace my hacky Docker solution with a script that downloads pods from GitHub releases.
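A hedged sketch of one piece of such a script: mapping the target Lambda architecture to an OS/arch suffix for a pod's release-asset name. The exact asset names are an assumption; check each pod's releases page for the real artifact names.

```shell
# Map the Lambda architecture to a hypothetical release-asset suffix.
target_arch=arm64   # or x86_64
case "$target_arch" in
  x86_64) suffix="linux-amd64" ;;
  arm64)  suffix="linux-aarch64" ;;
esac
# Hypothetical asset name for the go-sqlite3 pod used as an example
# later in the thread:
echo "pod-babashka-go-sqlite3-${suffix}"
```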


I imagine you're envisioning something like this, right?

BABASHKA_PODS_DIR=./deploy BABASHKA_PODS_ARCH=aarch64 bb prepare
bb uberjar ./deploy/deps.jar 


The "danger" with this is that local pods would be overwritten, and this would result in errors. E.g. we now have this:

$ ls ~/.babashka/pods/repository/org.babashka/go-sqlite3/0.1.0/
manifest.edn            metadata.cache          pod-babashka-go-sqlite3


So for this to work reliably we should also encode the arch and os into this dir structure
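A hypothetical version of that layout, with OS and arch appended so artifacts for different targets can coexist (the exact path scheme is an assumption, not the implemented one):

```shell
# Build a per-target cache path for the example pod above.
os=linux
arch=aarch64
pod_dir="$HOME/.babashka/pods/repository/org.babashka/go-sqlite3/0.1.0/$os/$arch"
echo "$pod_dir"
```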


With that in place, I think we could introduce the env var, but not without doing this first

Karol Wójcik 14:07:20

Yes, that's true.


OK, PR welcome then :-D


and an issue

Karol Wójcik 14:07:27

But with these changes you can then say: "bb is fully AWS Lambda compatible, and it's up to you which custom runtime you're going to use. It might be HL or any other you like more."


Yeah, fair enough.

Karol Wójcik 14:07:37

Sure sir! I will make two issues: 1. Incorporating arch info into the directory structure. 2. An env variable for downloading pods for a different architecture.


For 1: also OS info


such that you can download linux pods on mac or windows

👍 1

same for 2 I guess

Karol Wójcik 14:07:42

Ok, let it be.

Karol Wójcik 14:07:29

You want to make this new thing not essentially just for AWS Lambda, but for potential future use cases that might come along.