
Hi 🙂 I feel a bit bad about the PR, which I haven't worked on since August. Status:
• The neil dep upgrade command seems to be working as expected.
• But the tests are failing, and I don't know exactly why. I had some trouble getting the same results locally and on CI. Part of the reason is probably that I don't do any JVM Clojure work day to day, so I haven't been able to use neil for anything useful lately. (Though avoiding JVM startup makes me prefer neil to other alternatives, and I think it's a great example of a good use case for babashka.)
So, if anyone wants to take over the PR to get it merged, I would be happy with that, and happy to discuss if anything comes up. 🙂
Teodor


@cap10morgan I pushed a babashka branch with your pod PR in it. There are some failing tests. Could you guide me a little bit through this PR by writing up some details about the changes in one or both PRs?


Yeah, I need a linux/aarch64 clj-kondo to continue debugging the tests. Any chance of a release soon?


We just had a release so this might take a while. Maybe there are other pods you can test with?


well, I'm having some strange test failures w/ babashka's test suite even on master and the most recent release tag on my dev machine. so I wanted to run those tests, unmodified, in a docker container to establish a baseline of 100% passing tests before digging into what may be breaking some in my changes.


the test suite wants to use clj-kondo


maybe a linux/aarch64 build of the most recent release would be doable?


or maybe I can comment those tests out for now and then see if everything passes in CI once pushed


@cap10morgan I think it would be ok if we uploaded an aarch64 release manually for the previous release


and then updated the existing manifest


I'll try now


This is more of a user-focused doc, but useful for understanding the PR anyway. I'm happy to provide some implementation notes too where helpful.


Please do :)


well, the basic high-level thing is that when a namespace gets required that bb isn't already aware of (let's say ns.one.two.three), it adds ns/one/two/three/pod-manifest.edn, ns/one/two/pod-manifest.edn, and ns/one/pod-manifest.edn to the source-for-namespace search paths and then tries to see if it can load a resource from the classpath at any of those paths (basically the same way it would try to load a .clj, .cljc, or .bb file). if it can, it uses some new code to load a pod from the manifest, which is kind of inverted from the usual code path, where you have a pod name first and then get the manifest from the registry or the cache.
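a rough sketch of that path derivation (hypothetical helper, not the actual code in the PR — it just illustrates the search-path logic described above, including skipping the top-level segment):

```clojure
(require '[clojure.string :as str])

;; Hypothetical helper: turn a required namespace into candidate
;; pod-manifest.edn classpath locations, deepest first. Mirrors the
;; munging Clojure does for .clj files (dots -> slashes, dashes -> underscores).
;; Stops before the single top-level segment, since that's often a TLD.
(defn pod-manifest-paths [ns-sym]
  (let [segments (-> (name ns-sym)
                     (str/replace "-" "_")
                     (str/split #"\."))]
    (map #(str (str/join "/" (take % segments)) "/pod-manifest.edn")
         (range (count segments) 1 -1))))

(pod-manifest-paths 'ns.one.two.three)
;; => ("ns/one/two/three/pod-manifest.edn"
;;     "ns/one/two/pod-manifest.edn"
;;     "ns/one/pod-manifest.edn")
```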


and, of course, those pod-manifest.edn files get onto the classpath by putting a pod lib in your :deps map


What if I wanted to expose clj-kondo.core in my library for clj via normal code, but for bb via the pod, so people can use the same namespace? Then the manifest has to live in /clj_kondo/pod-manifest.edn - but what if there are more libs that have the clj_kondo prefix? It feels a little bit brittle. What I expected instead (in previous discussion) is that we would have clj_kondo/core.pod.edn for example, so it becomes unambiguous. Yes, you would have to insert the manifest potentially n times, once for every namespace, but to circumvent that, you could support a redirection in the .pod.edn to clj_kondo/clj_kondo.pod.edn or so, which then contains the complete manifest.
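for the record, the redirection could look something like this (file names and keys are made up here, just to illustrate the shape):

```clojure
;; clj_kondo/core.pod.edn — one tiny per-namespace indirection file
{:pod/manifest "clj_kondo/clj_kondo.pod.edn"}

;; clj_kondo/clj_kondo.pod.edn — the complete pod manifest it points to
{:pod/name pod.babashka.clj-kondo}
```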


(also commented this in PR)


That’s why I wanted an unlikely-to-be-used-by-something-else filename (in this case pod-manifest.edn instead of manifest.edn). I think it’s far simpler to just have one manifest for the whole pod, and highly unlikely to step on other stuff. You would put it in clj_kondo/core/pod-manifest.edn in this case (it has to be at least two ns segments deep). This seems like a thing we already have to deal with all the time with JAR path flattening.


What I mean is that when I expose 2 namespaces in my pod, I'd have to move my manifest to clj_kondo, but then this manifest potentially conflicts with other manifests


well, under the current implementation you can't move it to clj_kondo. it doesn't look in the top-level ns segment b/c they're often TLDs or pod or similar. and like you said, there's a lot of conflict potential then. but yeah, I see your point. for two-level namespace pods you're back to manifest-per-ns anyway. clj_kondo/pod-manifest.edn is probably OK on a classpath, but com/pod-manifest.edn or pod/pod-manifest.edn obviously isn't. :thinking_face:


yeah... I'm coming around to path/to/my/ns.pod.edn or similar. there's not enough consistency in how namespaces are segmented to be able to reliably start with one and turn it into an unambiguous manifest path for the whole pod.


The ns.pod.edn could be one level of indirection though: it could have
1. The location of the full manifest on the classpath
2. A mapping of namespace -> pod namespace, e.g. clj-kondo.core -> pod.babashka.clj-kondo
It might be good to first flesh out these details before changing a lot of code.
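so concretely, something like this (keys invented for illustration):

```clojure
;; clj_kondo/core.pod.edn — hypothetical indirection file
{;; 1. where the full manifest lives on the classpath
 :pod/manifest "pod/babashka/clj_kondo/pod-manifest.edn"
 ;; 2. library namespace -> namespace the pod actually exposes
 :pod/namespaces {clj-kondo.core pod.babashka.clj-kondo}}
```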


Or the full manifest could have that mapping


What we could also do is this:

a .bb file, which already has preference, will execute some pod-related code, which will then ensure that the namespace exists. E.g. just (load-pod ...)


And this will just load the pod for the same version as the library


I needed to sleep on all this (in part b/c I've been sick this week, so my brain runs out of the thinky stuff earlier in the day than usual).

So fundamentally, what we need to accomplish is going from a namespace (since that's all that require gives us) that babashka hasn't seen yet (since otherwise we have a better way to load it) to a pod binary we (may) need to download and load. I agree with you that full-ns -> full-ns.pod.edn or similar is a nice, reliable approach. Clojure/bb's built-in code loading either finds a one-to-one mapping from an ns to a file like this or throws an error, for very good reasons no doubt.

However, there will be use cases where having a file in every namespace path that you should be able to load in "pod mode" will be cumbersome (if the pod dev has to put those files there manually; perhaps we can find an approach where they don't). Like the now-deprecated pod-babashka-aws pod, which walked over all of the namespaces in the aws client library to map them into the pod's describe map. That one may be deprecated, but it's a use case I think we should still keep in mind.


That is why I proposed an indirection


or a .bb solution


I think it gets really cumbersome even if those files are just indirections


even if they're empty


but what about a build tool helper for these kinds of pods that can e.g. run your pod, get its describe map, and then generate all of the little indirection files for you? most pod authors won't need it. but for those that do, it would make a big difference.
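as a sketch, such a helper could take the pod's describe response (the pod protocol's "describe" op returns a map with :namespaces, each having a :name) and stamp out the little files. Everything else here — the ns, fn names, and the :pod/manifest key — is hypothetical:

```clojure
(ns pod-manifest-gen
  (:require [clojure.java.io :as io]
            [clojure.string :as str]))

;; Munge a namespace name into its classpath directory form.
(defn ns->path [ns-name]
  (-> (str ns-name)
      (str/replace "-" "_")
      (str/replace "." "/")))

;; Hypothetical build-tool helper: given the pod's describe map and the
;; classpath location of the one full manifest, write a tiny indirection
;; file for each namespace the pod exposes.
(defn generate-indirection-files! [describe-map manifest-path out-dir]
  (doseq [{ns-name :name} (:namespaces describe-map)]
    (let [f (io/file out-dir (str (ns->path ns-name) ".pod.edn"))]
      (io/make-parents f)
      (spit f (pr-str {:pod/manifest manifest-path})))))
```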


What we could also do is pre-scan the classpath for pod stuff and cache that result, so we can skip that work the second time


a lib or similar


I would say: first make the mechanism, then make it easier, else it feels like premature work


hmm... interesting. and use the classpath itself (or a hash or whatever) as the cache key?
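something like this, maybe (purely illustrative, not what bb does — a real scan would also need to look inside jars, which this skips for brevity):

```clojure
(ns pod-scan-cache
  (:require [clojure.java.io :as io]
            [clojure.string :as str]))

;; Cache key derived from the classpath string itself: same classpath,
;; same key, so the scan result can be reused across startups.
(defn classpath-cache-key [classpath]
  (format "%08x" (hash classpath)))

;; Walk directory entries on the classpath looking for pod manifests.
;; (Jar entries omitted here to keep the sketch short.)
(defn scan-for-manifests [classpath]
  (for [entry (str/split classpath (re-pattern java.io.File/pathSeparator))
        :let [dir (io/file entry)]
        :when (.isDirectory dir)
        f (file-seq dir)
        :when (= "pod-manifest.edn" (.getName f))]
    (.getPath f)))

;; Reuse a cached scan result when one exists for this classpath.
(defn cached-scan [cache-dir classpath]
  (let [cache-file (io/file cache-dir (classpath-cache-key classpath))]
    (if (.exists cache-file)
      (read-string (slurp cache-file))
      (let [result (vec (scan-for-manifests classpath))]
        (io/make-parents cache-file)
        (spit cache-file (pr-str result))
        result))))
```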


certainly, that would be the order we'd do it in


but I wouldn't want to do all the work if we didn't at least have an idea of how to make that use case feasible


clj-kondo also has a classpath scan mechanism: it scans the classpath for clj-kondo.exports directories and copies its contents to the .clj-kondo config directory


that's an interesting idea


but clj-kondo only does it on demand. lsp which uses kondo does it for you though


bb might need such a mechanism for data_readers.clj as well btw, right now it doesn't support that


(I left that out for performance reasons)


perhaps we could provide a --skip-pod-lib-scan or similar flag for those who know they don't have any and don't want to pay the perf hit?


bad idea, nobody will use that flag


it should work out of the box without any config


but caching might be ok


right, that's what I'm getting at. the feature works out of the box. if you notice the perf hit and don't like it and complain (which some will), then we point you to that flag.


it's an opt-out, not an opt-in


yeah, but if we cache then nobody will need that flag


not sure I agree. think ephemeral environments where the cache doesn't stick around


but that's a secondary concern and one we can address when/if it's an issue


I think such a scan should not take more than 10-50 ms or so, so probably most people won't notice


but there is the concern that your classpath will have libs with pod manifests that you don't want or need to download


yeah, it would just be a combination of like ephemeral env & critical startup time (like aws lambda)


hmm... right. like hybrid libs w/ some nses that are native bb compatible and others that need a pod?


and we don't yet know which nses that project is going to load?


this was all much simpler when people had to declare their pods up front 😆


sorry to hear you were sick btw, hope you're doing better


yeah, mostly better today. thanks!


maybe that's why we started with it. and why we have load-pod :)


so I was thinking, we could have just a .bb file that calls load-pod that overrides the ns?
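i.e. a stub on the classpath, roughly like this (babashka.pods/load-pod is real API; the file path and version string here are made up):

```clojure
;; clj_kondo/core.bb — a stub that bb's source loading would find for
;; (require 'clj-kondo.core), and that loads the pod instead of real code.
(require '[babashka.pods :as pods])

;; Loading the pod creates and fills the namespaces the pod exposes,
;; overriding this placeholder ns. Version is hypothetical; ideally it
;; would track the library's own version.
(pods/load-pod 'clj-kondo/clj-kondo "2023.01.20")
```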


that would still rely on the centralized registry?


we could extend load-pod with support for loading a registry from the classpath?


eh manifest


yeah, my PR already adds load-pod-from-manifest


that is an elegant, generic mechanism


and if you want to "prepare" then you just load these namespaces up front...? but then they also execute...


prepare may need to copy uberscript's ns & require eval'ing for this anyway


but prepare now also supports downloading for other archs/oss?


yeah, with certain env vars set.


it turns on a "download only" mode when those don't match the system, so perhaps we can hook into that and only do the download and cache even though we started w/ eval'ing a require


only for prepare, of course


well, actually, for prepare, I guess it already doesn't handle pods that are explicitly loaded w/ load-pod but not declared in bb.edn :pods. and eval'ing ns and require forms wouldn't help there if we use .bb files that call load-pod or load-from-manifest. so perhaps what it would really need to do is just find those forms too (i.e. load-pod*) and download and cache the pods they refer to. they wouldn't need to "eval" them per se, but just pick out the relevant info. hmm... perhaps treating code as data in a too-brittle way then...


although... if it isn't already, we could expose the download-only? opt to those fns (it's there in the call chain pretty soon anyway if not already directly available) and just force it to true in this context and eval away