
@mruzekw @oliver.marks I haven't tried cloud build within the context of skaffold but have been doing some Cloud Run deploys with jib/CloudRun and with jib/Kustomize (via Flux). We created a deps tool to automate the jib build and push to GCR. It's documented here: This could be useful if you're already using deps.edn because you can add a jib.edn like:

{:main atomist.web.handler
 :target-image {:type :registry
                :image-name ""
                :tagger {:fn jibbit.tagger/tag}
                :authorizer {:fn jibbit.gcloud/authorizer}}}
It'll use your deps.edn to create a layer with all the dependencies, and another layer with your code (possibly AOT-ed). The jibbit.gcloud/authorizer assumes that there's a gcloud available and that you're authenticated. At that point, clj -Tjib build will create your image and push it to GCR. gcloud run deploy... would kick off at this point for us and we start testing the new revision. I'm interested in how we might hook into skaffold here too.


Does this use the expand jar method like vanilla Jib for quick (re)builds?


@mruzekw all of the jar dependencies are unpacked to a separate layer so as long as those don't change, only the layer with the clojure sources will be updated. In practice, the build/push cycles with GCR are fast because you're only really changing one small layer.


I'm not sure if that's exactly what you meant though.


I think that answers my question. I just remember a Jib dev mentioning they unpack the jar to create separate layers. Wondered if this followed the same strategy


Over on the Kubernetes #skaffold channel > Jib doesn't package up the build jar file and instead includes the .class files directly as it's faster.


Also the most direct way I've found to integrate a clojure project with Jib is to use Maven and hook into these build steps: > Skaffold will call mvn prepare-package jib:{build,dockerBuild}, or mvn package jib:{build,dockerBuild} for a multi-project setup. prepare-package and package will first run the compile phase, so it should all work
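For reference, a minimal skaffold.yaml sketch of the Maven hookup described above might look like this (the apiVersion and image name here are assumptions, not from the thread - check them against your skaffold install):

```yaml
# Sketch only: skaffold's built-in jib builder drives the mvn goals above.
apiVersion: skaffold/v2beta29      # assumed; match your skaffold version
kind: Config
build:
  artifacts:
    - image: gcr.io/my-project/my-clj-app   # hypothetical image name
      jib: {}                               # use the jib-maven-plugin defaults
```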


There are also build / file sync hooks in skaffold.yml


But it would be nice if there were an API to build community builders. I know there's this, but not sure if it satisfies that


Ya, we're using jib core but coming up with a packaging standard that I think is more suitable for clojure. So there are two layers: One slow changing one with the dependency jars, and a second one with code.


If you're using deps.edn already, this probably creates a good layering. I wonder if the "Custom Script" configuration here ( would be sufficient to plug in clj -Tjib - the rest of the phases look like they wouldn't need to be altered.


Ah good point.


I've been looking more into a Jibbit + Skaffold integration. Happy to work on this if your team isn't already working on it


@mruzekw Ya, it'd be great to see whether you can do a skaffold integration. We've been looking at GCP integrations (both GKE and CloudRun) but we've been looking at the problem slightly differently. Instead of being push oriented (where we "push" the image at the end of something like a skaffold pipeline), we're using a pull-oriented model where a GKE cluster namespace, or a CloudRun service, defines a set of checks that have to be run on an image, and then they "pull" candidate images only when all of the checks have passed. Checks are independently authored functions but they're things like:
• image must come from a trusted registry
• image must be scanned and be indexed by its bill of materials
• image can not have more vulnerabilities than the currently deployed version
• image must be scanned for secret leakage
• image must be signed by a trusted builder
• etc etc
As you can probably tell from the above, the project is about building out consistent security checks that teams can plug in to their delivery without having to integrate with different CD systems. If you're interested, I can give you more details. Skaffold definitely looks good though - I'm keen to help out with the integration


I've not used skaffold or jibbit; but there is also pack: Which works well with; at least on my fork which adds a few missing features (I have some PRs waiting to be merged upstream).


@U0143KP47M4 Sounds like a worthy effort. Though it'd be nice if Jibbit could serve the "push" case as well. So far I've been looking into it. Skaffold injects shell env vars for the shell script to use. A very simple solution would be to build a bash script that runs clj -Tjib ... with the appropriate env vars. But I was thinking it would be cleaner and more maintainable for users to allow env vars in jib.edn. This could be done by integrating into the config slurp process. Working on the bash proof-of-concept now.
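A minimal sketch of that bash shim, assuming skaffold's documented custom-builder env vars (IMAGE, PUSH_IMAGE). Whether jibbit accepts a :type override on the command line is an assumption, so this version only prints the command it would run:

```shell
#!/usr/bin/env bash
# Sketch: skaffold custom-builder shim for jibbit. skaffold exports IMAGE
# and PUSH_IMAGE; a jib.edn environment tagger can pick IMAGE up itself.
set -euo pipefail

# Choose jibbit args from PUSH_IMAGE; kept in a function so it's easy to test.
jib_args() {
  if [ "${PUSH_IMAGE:-false}" = "true" ]; then
    echo "build :type :registry"   # push to the registry named by IMAGE
  else
    echo "build :type :docker"     # load into the local docker daemon
  fi
}

echo "would run: clj -T:jib $(jib_args)"
# exec clj -T:jib $(jib_args)      # uncomment for real use
```

The dry-run echo makes the branching testable without clj installed.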


Ya, I see what you mean. That makes sense. Your approach should work well with something like github actions or GCP cloud deploy too. Which of the skaffold environment variables do you need to pull into the config so far?


@U06HHF230 I hadn't seen this project before - looks good. Similar goals to jibbit I think.


@U0143KP47M4: yeah I’ve been using pack for a few years now; but looks very similar

👍 1

It depends on which features we'd like to support. Here's what I have in mind:
• [ ] Integrate $IMAGE - naming the image
• [ ] Integrate $PUSH_IMAGE - whether or not to push the image
• [ ] Integrate $PLATFORMS - which platform to build for
• [ ] Create a command to list paths for watching
• [ ] An example with paths/ignore listed for file syncing (ideally just the command above would be sufficient for both watching and syncing, but it's unclear if that's the case)
• [ ] Extra: a command to generate an out-of-the-box skaffold.yml
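For the path-listing item, a hypothetical tool function (names made up, not part of jibbit) could print the project :paths from a tools.build basis as the JSON array that skaffold's custom.dependencies.command expects:

```clojure
;; Hypothetical sketch - assumes is on the tool classpath.
(ns jibbit.skaffold-paths
  (:require [ :as b]
            [clojure.string :as str]))

(defn paths
  "Print deps.edn :paths as a JSON array for skaffold's
   build.artifacts[].custom.dependencies.command."
  [_]
  (let [basis (b/create-basis {:project "deps.edn"})]
    (println (str "[" (str/join "," (map #(str "\"" % "\"") (:paths basis))) "]"))))
```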


Backpedaling some though, it doesn't look like dyn-edn supports if/else for dynamically configuring [:target-image :type]... So will need to think about the approach more


• The $IMAGE makes sense - if skaffold is driving the image naming (including the tag name) then I guess we'd want to allow skaffold to drive the "target" image configuration
• I guess we'd interpret the $PUSH_IMAGE as the "type" of target image (if the type is registry, jib pushes the image automatically, but the type can also be a local docker daemon, or even just a tar file)
• The $PLATFORMS is about creating a manifest list I guess
• Is that dependency-watching path listing a way to integrate a filesystem watcher for automatically rebuilding local images?


Yep, the "command" config option is to list out the paths to watch (and ignore) for watching to rebuild and sync. But it seems we also need to list out the paths in the skaffold.yml file too for syncing (nice for dev to avoid rebuilds). That said, I'm thinking of asking the skaffold team if a "command" config option is/could be used for listing files to sync as well

👍 1

Dove into this. Ran into some issues, but I think the biggest blocker to a reasonable Clojure + Jib + Skaffold setup is I'll pick it up again after we have the ability to just enable file syncing with a custom builder.


So if you make a file change in your project (to a .clj file for example), then if you have sync enabled, skaffold will copy the updated .clj file into the running pod, and skip the rebuild of the image. Is that right? It was unclear to me how skaffold would know when to do a sync, and when to rebuild the image, push it, and update spec.


That's correct. The problem however is that you can't turn off just "watch-rebuild" with the "custom" builder without also disabling sync. So far as I'm aware.


Thus minimizing utility for Clojure dev


Ya, I guess I'm starting to see what you mean. I just would have thought that they'd always want either one or the other. In other words, if the change is only for files that support syncing, then just sync. If the change is for something else, then rebuild the image, and go through the image push, spec update workflow. So I would've thought your desired configuration here would almost be the default. Made me think I was missing something crucial. When you automatically file-sync your *.clj files, do you have a mode where the container runtime will detect the file update and re-load the namespace?


One could achieve the setup I'm talking about with the paths and ignore keys in the skaffold.yml, so we could technically generate a new skaffold.yml with those filled out just for file-sync. Or we could expect people to fill that out themselves and just document it. > When you automatically file-sync your *.clj files, do you have a mode where the container runtime will detect the file update and re-load the namespace? Yep! Example user.clj
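The linked user.clj isn't reproduced here, but a hedged sketch of the idea - a polling watcher that drives tools.namespace reloads - might look roughly like this (the real file may differ; assumes org.clojure/tools.namespace on the :dev classpath):

```clojure
;; Sketch only - the actual user.clj lives in the linked example project.
(ns user
  (:require [clojure.tools.namespace.repl :refer [refresh]]))

(defn watch!
  "Poll src/ and reload namespaces when any .clj file's mtime advances."
  []
  (future
    (loop [seen 0]
      (let [latest (->> (file-seq ( "src"))
                        (filter #(.endsWith (.getName ^ %) ".clj"))
                        (map #(.lastModified ^ %))
                        (reduce max 0))]
        (when (> latest seen)
          (refresh))
        (Thread/sleep 500)
        (recur latest)))))

(defn -main [& _]
  (watch!)
  ;; start the web server and nrepl server here
  )
```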


Cool, that's pretty nice! So if you leave your repl and look at a port-forwarded kube pod, it'll stay in sync with what's currently on your filesystem I guess. That's awesome! My instinct is that having to put in the paths/ignore to enable file-sync isn't too bad because you'd have to document how to prep the watcher in their container too, right?


True. I was just looking at the ideal solution where Jibbit would provide a command for the custom > dependencies > command option. Users just need a command to output a JSON array of paths to watch. As I've said though, we can't selectively turn on sync without rebuild, making this not ideal.


So yeah, I think at least documenting skaffold integration would be a good first step. Happy to contribute that.


I'll see whether I can get your scenario working on one of my projects. I don't really have an instinct whether we should try to use the deps.edn to initialize a first version of the skaffold.yaml file. By the way, I would assume that the file sync option only works for GKE (not for CloudRun where you won't have enough access to the pod?).


That's right. This would be for GKE with CloudCode. (They use Skaffold behind the scenes in the Cloud Run emulator. But I don't think they allow customized setups.)


I think an example and/or docs would be enough.


@mruzekw I was trying this out with one of my existing jib.edn files. My :target-image is currently

{:type :registry
 :image-name ""
 :user "nobody"
 :tagger {:fn jibbit.tagger/environment :args {:varname "IMAGE"}}
 :authorizer {:fn jibbit.gcloud/authorizer}}
where I added a new :tagger so that I could pull IMAGE out of the environment managed by skaffold. I've been letting custom clj -T:jib build do the initial push. I kind of wish that build.custom.dependencies.command would let me fill in the paths and ignore array directly.


Right, they've acknowledged my feature request but labeled it a p3, so I'm not expecting it to be done any time soon


But I like the idea of being able to configure env vars for different params


I was just building custom functions with Jibbit as a lib and pulling the values via System/getenv

👍 1

I also wasn't really able to get the file sync feature working in my environment. I used

  - image:                  # image name elided in the original message
    custom:
      buildCommand: clj -T:jib build
      dependencies:
        paths:
          - src/atomist/web/handler.clj
    sync:
      manual:
        - src: src/atomist/web/handler.clj
          dest: /src/atomist/web/handler.clj
just so I could try to make changes to src/atomist/web/handler.clj and see them. I see messages from skaffold saying that it's syncing to the existing pod (Syncing 1 files for but as far as I can tell, it didn't sync.


This configuration did skip the custom build step. It just didn't seem to do the sync I was expecting.


It's good they've acknowledged the feature request - that's progress anyways. I'm just wondering if there's one good skaffold.yaml configuration that could be documented to show file syncing along with your user namespace for re-loading the namespace. It would be pretty cool to see that kind of workflow really working.


@mruzekw I had some pebcak above - there's a bug where I'm writing layers as root but then running as non-root, and that was interfering with the sync process. So the below is at least doing what I expect - syncing local .clj changes and only rebuilding the image if it's missing:

  - image:                  # image name elided in the original message
    custom:
      buildCommand: clj -T:jib build
      dependencies:
        paths:
          - src/atomist/web/handler.clj
        # command: clj -X:skaffold-deps
    sync:
      manual:
        - src: src/atomist/web/handler.clj
          dest: /


Nice! Does it work on just dirs as well?


Ya, it seems to work fine for dirs. The tricky part was that there's no logging when the sync doesn't work. So I was getting tripped up by that root/non-root problem and read-only filesystems in the pod spec.


But we could probably create a good skaffold generator that maintains a build section using your deps.edn. Just need to see whether I can make sure that layers are written as the same user that will be written into the container metadata. That should fix any file sync issues.


I'll swap my example out for a dirs-only one


> Just need to see whether I can make sure that layers are written as the same user that will be written into the container metadata. That should fix any file sync issues. Is this doable with Jib itself somehow?


Ya, I think so. Will verify later today


Ya, updating the ownership of the files works. I merged some changes to main that I needed to get this working. I started a little wiki page here - maybe we can turn this into a little HOW-TO if we can make it reliable. I really like the idea of generating the skaffold.yaml.


Maybe you've already got something done there.


Awesome! I can share what I have. But I didn't get as far as generating skaffold.yml


Cool, I'll probably push a new release of jibbit just in case anyone is using tags to pick up new versions of the tool. But maybe we could collaborate and make an article for the #news-and-articles channel.


Sure, I would be happy to help on an article. I'm setting up a sample project now. Will send it over when it's up. Doesn't include any changes from the new version, so let me know how I can improve it.


Okay, I'm very close to having a fully working setup. But I'm having trouble forwarding both the web server port and the nrepl port. How does Jib/Jibbit know which ports to expose? Could/should we make this configurable? Figured it out, was a k8s config thing


Ya, k8s doesn't really enforce the PORT metadata in a container so we didn't bother adding it.


So I think I'm pretty close to a working setup like I said. Though I'd like the entrypoint (for skaffold dev) to be (ns user). But I can't seem to get it in the classpath. The user namespace has a (defn -main ...) to kick off the web server and nrepl server. Could you give my setup a look when you have a moment? I'm stumped.


Does the :skaffold-dev alias also need an :exec-args entry of :main user ?


Since you put the :dev alias in, I guess it could read that out of the :main-opts but I'm not sure it does


Oh, you're right, it does 🙂. Hmmm, let me grab those and try to run them


Thanks! Sure, I can push my whole repo up too


Noticed a typo, pushed


When I run just clj -M:dev everything loads up fine and I can connect to the nrepl


I think there must be something wrong with :extra-paths processing and we're not copying dev/user.clj into the image. Because the entrypoint is right. But the application layer doesn't have a /dev/user.clj in it.


Yep, should be


(ns user) does load if I put it in src/user.clj


Ya, so it's a problem with the :extra-paths processing. We should be copying /dev in too.


Weird, I thought the basis creation in deps.edn took care of that but something doesn't seem to work there. Oh, the app is running from / and /dev already exists in the base image and is not writable by "nobody". Maybe we'll install the app in /home/app instead. That'll be safer if the WORKDIR is something that gets created by jib.


Good to catch this one 🙂


Actually, I think you can fix it by putting :working-dir "/home/app" into your current :skaffold-dev alias. / is probably just not a good default for the WORKDIR here.


Maybe /home/app or /home/cljapp - what do you think?


Sorry! Was getting a coffee


I think /home/app is good :thumbsup:


I'll try the :working-dir on my end


Nice! That works :thumbsup:


awesome, I'll probably change the default to /home/app in the next version. Makes more sense than /


Got file sync and auto reloading working!


Cool, sounds good. I'll push up my changes


And use :working-dir for now


woot! Should be a very cool flow - I'm not familiar with integrant but I guess that does some of the re-load work for your app


It's like Component or Mount where you can define components of your app and their dependencies. Then you can start/stop all at once or individually.


If you look in src/myusername/myapp/system.clj, you'll see the whole config there. The reload magic is in ig/suspend-key! and ig/resume-key definitions


And it's all pulled straight from the Integrant README


That's pushed!


cool, thanks, I'll take a look. I'm familiar with mount so this makes sense to me now.


For sure. Let me know if you have any trouble with it, and if there's anything I can improve.


Thanks for looking into the :extra-paths issue 🙇


Cool! Will take a look


@U0143KP47M4 thanks for the info and the thread in general - been away; hopefully I will get some time to try some of this out

👍 1
Oliver Marks 20:03:40

so I'm not overly familiar with clj -T tools. I can run the command via the command line; is there a way to pull the jib dependency and run the build as a deps.edn alias? It would work quite nicely in a CI setup then



:jib {:deps {io.github.atomisthq/jibbit {:git/tag "v0.1.13" :git/sha "4547f2d"}}
      :ns-default jibbit.core
      :ns-aliases {jib jibbit.core}}
clj -T:jib build


That or do a global install

clj -Ttools install io.github.atomisthq/jibbit '{:git/tag "v0.1.13"}' :as jib


Ya, the -T:jib works great for a self-contained deps.edn. For CI builds, it's actually possible to write a ~/.clojure/tools/jib.edn into your CI environment and the clj -Tjib syntax will work again. So if you really need to use the clj -T syntax on deps.edn files that don't have this alias, you can "prep" the environment. I'm not sure how practical this is (or even if it's a good idea). Part of me thinks just keeping the alias in the deps.edn is a better approach because the project can control the version more directly. I just point out the ~/.clojure/tools/jib.edn model because clojure Tools do support injecting tools into a CI environment this way.
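For reference, the descriptor that clj -Ttools install writes has roughly this shape (assumed from the Clojure CLI tools docs - verify against an actual install):

```clojure
;; ~/.clojure/tools/jib.edn - sketch of the installed-tool descriptor
{:lib io.github.atomisthq/jibbit
 :coord {:git/tag "v0.1.13" :git/sha "4547f2d"}}
```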


I wonder where the best practices for deploying -T tools will end up going here.


My guess is either would be fine assuming you can stage the build env. Docker container and what not


@mruzekw I've been looking over your starter project here - it's a nice simple startup:

minikube start
skaffold dev --auto-build=false --no-prune=false --cache-artifacts=false
It's also an interesting starting point for non-minikube test clusters too. I tried replacing one pod in my test cluster with --no-prune=true - at the end, I had to roll back the update to that one pod manually, but it's interesting that this works with any cluster in your local kubectl context.


I was unsure how the :build alias is being used. I don't think it's part of the current skaffold dev workflow, right? In general, uberjar-style packaging has a few problems that are hard to anticipate - I've been trying to make sure that we don't have to package uberjars in the image, and that lib layers can be scanned/patched independently.


The other thing I kind of wondered about is the difference between using skaffold.clj / clj -X:skaffold or jib.edn / clj -T:jib, but I think I'm starting to answer my own question. You're trying to pull out both the test and the build into one build.clj program. So build.clj serves as kind of a way to encapsulate the logic for both how you run your unit tests and how you build your docker image. Is that right? Lines could be extracted into a jib.edn but it's nice to have a clj -T:build entrypoint for both your testing and your docker building. It would probably have been more clear for me if skaffold-dev had been named container-build or jib-build or something. At first, I thought this was somehow an entrypoint for skaffold itself.


Thanks for the feedback!
> In general, uberjar-style packaging has a few problems that are hard to anticipate
The uberjar build in CI is left over from the template I used to start the starter. I can just remove that now to avoid confusion.
> So build.clj serves as kind of a way to encapsulate the logic for both how you run your unit tests and how you build your docker image
That's right. Outside of skaffold dev itself, build.clj is the central entrypoint to all dev tasks. I agree skaffold-dev is a poor name. I was thinking it was just going to be for skaffold, but it could easily just be a generic "build container" command. The thinking behind -T:build <task> is to provide a simple interface for beginners to start using the template. I don't want users to know or think about composing the right aliases on the command line or knowing which -M/A/T/X flag to run them with. This is inspired by my regular npm/package.json workflow: npm start/test/etc.
> extracted into a jib.edn
Yes! I will do that. In addition I will also see if I can remove build/skaffold.clj and narrow it down to just build.clj.


Cool, and I know what you mean about trying to simplify down to -T:build <task> . The clj -X, -T, -M, -A options are finally starting to sink in for me, but that's quite recent. The Xecute, Tool, Main mnemonic has helped. For -A, I use AlwaysBeRepling, and the -S ones are just SomethingElse.


I think I'm going to make a "jibbit" starter as well. For those who just want to use jibbit


Hmm, so I'm trying to extract the jibbit config into jib.edn. Though I'm finding I can't use a string or symbol for :main :main "user" yields "Could not locate \"user\"__init.class, \"user\".clj or \"user\".cljc on classpath. :main 'user yields Could not locate 'user__init.class, 'user.clj or 'user.cljc on classpath." I would expect at least the second to work given:

user=> (pr-str 'user)
"user"


Granted I'm slurping the jib.edn myself so I can modify it with any env vars present.

(defn build-container [_]
  (let [image-name (System/getenv "IMAGE")
        ;; read the env var itself - (Boolean/valueOf "PUSH_IMAGE") would
        ;; always be false because it parses the literal string
        push-image? (some-> (System/getenv "PUSH_IMAGE") Boolean/valueOf)
        ;; TODO Support platforms
        ;; For now, use base image matching your host CPU arch (amd64/arm64)
        ;; platforms (get-env :string "PLATFORMS")
        edn-file (io/file "." "jib.edn")
        config (edn/read-string (slurp edn-file))
        config-with-env (-> config
                            (cond-> image-name (assoc-in [:target-image :image-name] image-name))
                            (cond-> (some? push-image?) (assoc-in [:target-image :type]
                                                                  (if push-image? :registry :docker))))
        _ (pprint config-with-env)]
    (jibbit/build {:config config-with-env})))


I do really like your build.clj/skaffold.yaml/jib.edn combo. I find that quite intuitive. I also like that there's a unit-test runner plugged in to the flow right from the beginning. From that starting point, you can address a lot of next steps, or how-tos:
• add a unit test
• push to a different registry
• run on a different cluster
• add a service spec
• deploy a stateful set


I added a way to use the IMAGE env

:target-image {:tagger {:fn jibbit.tagger/environment :args {:varname "IMAGE"}}}


The good thing about what you're doing above is that you can always do it. I guess that's another advantage of always having a build.clj program there to help you adapt the tools to your environment.


Right! It's just data! 🎉

💯 1
Oliver Marks 19:03:19

@mruzekw The deps aliases seem to work; however, since setting up Google Artifact Registry, when I build I get this.

Execution error (FileNotFoundException) at$add_registry_credentials$reify__362/retrieve (build.clj:21).
Could not locate 'jibbit/gcloud__init.class, 'jibbit/gcloud.clj or 'jibbit/gcloud.cljc on classpath.
Which could be auth related, but it sounds more like a missing file; looking in git it looks like it's there in the repo, so not sure why this may be?


@U02DXJUS5JA Could you share your jibbit config? It's worth noting I haven't touched any registry push workflows yet in the starter

Oliver Marks 20:03:06

{:main ""
 :base-image {:image-name ""
              :type :registry}
 :target-image {:image-name ""
                :type :registry
                :authorizer {:fn 'jibbit.gcloud/authorizer}}}


Is this in the build.clj or a jib.edn?

Oliver Marks 20:03:45

No build.clj, but I do have a jib.edn - the above is in the jib.edn

Oliver Marks 20:03:22

It only errors if i have the target-image set

Oliver Marks 20:03:47

so I thought I would remove the authorizer part, and now it errors with Permission \"artifactregistry.repositories.downloadArtifacts\", which I would have expected to be a push not a pull, but perhaps it's permissions somewhere


So I see two potential issues...
1. 'jibbit.gcloud/authorizer is literally being translated to a string filepath as 'jibbit.gcloud/authorizer.clj when we really just want jibbit.gcloud/authorizer.clj. @U0143KP47M4 This is the same or similar problem I'm running into with my jib.edn. For some reason 'random-symbol is read differently between EDN and Clojure code.
2. You'll need to make sure you're using an authorizer that works for the registry you're trying to push to. The Jibbit readme has examples for GCR, ECR, and DockerHub.
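The symbol-vs-quote difference is easy to see at a REPL (a small illustration, not jibbit's actual config-loading code):

```clojure
(require '[clojure.edn :as edn])

;; In an EDN file a bare token is already read as a symbol:
(edn/read-string "{:fn jibbit.gcloud/authorizer}")
;; => {:fn jibbit.gcloud/authorizer}

;; The Clojure reader expands 'sym to (quote sym), which is why the quoted
;; form in the config resolved to the wrong thing:
(read-string "'jibbit.gcloud/authorizer")
;; => (quote jibbit.gcloud/authorizer)
```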

Oliver Marks 20:03:38

I just tried building and pushing a dummy image to the registry to make sure I have permission locally; that seems to work using docker push. I am using Google's Artifact Registry, which is their newer service for containers (it also does libraries), but it sounds like it's the string issue I am likely hitting


I'll try pushing to GAR as well - I assume it uses the same oauth2accesstoken from your logged in gcloud session. That code is really just gcloud auth print-access-token


Okay, then I suspect it's more issue 1, because I would expect the authorizer to work with both GAR and GCR

Oliver Marks 20:03:44

I just removed the ' so I have :authorizer {:fn jibbit.gcloud/authorizer}, which I noticed the readme does not have, and now it's sat there doing something

Oliver Marks 20:03:45

Success! I can see the image uploaded. Next step CI - thanks for the help 🙂


Awesome! For sure


Oh geez - it's a bug in the README. It's just supposed to be the symbol in the edn file - not quoted like that. Oops


Just updated that but need to put some more checking around those - thanks @U02DXJUS5JA!

Oliver Marks 06:03:25

cheers thanks for updating 🙂

Oliver Marks 20:03:16

So I am hitting an issue with the build, which I believe is related to using a polylith layout

 Execution error (NoSuchFileException) at sun.nio.fs.UnixException/translateToIOException
 ./src
I am assuming this is because src is not in the same folder as deps.edn; with polylith you use local relative imports to load components and bases, so the src folders are spread out and you include them with {:local/root "../../components/my-component"} in your project deps files. Is this likely to be an issue / can it be resolved, or am I likely to be hitting something else?


Hey @U02DXJUS5JA - the polylith layout looks really interesting. So, what we do so far is:
• compute the basis using the local deps.edn, and that also sets the project-root too
• from the computed basis, filter out the entries that have a :path-key
• copy over all of the paths into the application layer of the image


It sounds like the :local/root might be causing an issue because in general, the basis must know all the paths, otherwise other things would be broken.


Is your project public by any chance? Don't know if my terminology is right here, but each "basis" in a polylith layout should be able to build an accurate image using jib.

Oliver Marks 05:03:08

no, it's not public unfortunately. I did try building from the base and this seemed to give better results, but when I examined the image it looked like all the components were missing. Is it possible to specify the paths manually in jib as a way to test?


We could definitely add a way to put the paths in manually. I feel like there's probably a way to parse the basis so it does the right thing though. In a polylith project, I guess a lot of the libraries are not packaged in jar files. There is an :aot option that we can add (clj -T:jib build :aot true) and it will try to compile all of the sources in the bases, but I've also not tried that on a polylith layout. I'm just looking at what a basis filled with libraries using :local/root will look like.


You can generate a classpath from a basis with a local basis.clj file with:

(ns basis
  (:require [ :as b]
            [clojure.pprint :refer [pprint]]))

(defn basis [_]
  (pprint (:classpath (b/create-basis {:project "deps.edn" :aliases []}))))
And if you add an alias of
:basis {:deps {io.github.clojure/tools.build {:git/tag "v0.7.5" :git/sha "2526f58"}}
        :ns-default basis}
then run clj -T:basis basis and I think you'll see a lot of src directories that don't have path-keys and aren't being extracted. If we make a change to that, it may just pull in all of the right sources.


Ya, that misses all of the libs with :local/root specs. So things like:

 {:lib-name poly/rest-api}
don't get copied in because they look like libraries but are actually more like paths - I think we can extend the jib packaging to support this
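One way to sketch that extension, assuming basis :classpath entries carry either a :path-key (ordinary :paths) or a :lib-name (libraries, including :local/root ones) - a hypothetical helper, not merged code:

```clojure
;; Sketch: collect app-layer paths from a tools.build basis, including
;; :local/root libs (directory entries that have a :lib-name, not a :path-key).
(defn app-layer-paths [basis]
  (for [[path m] (:classpath basis)
        :when (or (:path-key m)
                  (and (:lib-name m)
                       (.isDirectory ( ^String path))))]
    path))
```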

Oliver Marks 08:03:03

okay, sounds great. I thought that may have been the issue - :local/root I have not really seen used outside polylith


Hey @U02DXJUS5JA - I worked on polylith support in a branch yesterday. I've been using as my example. So I added a :jib alias explicitly to projects/realworld-backend/deps.edn

:jib {:deps {io.github.atomisthq/jibbit {:git/url ""
                                         :sha "41d71f062acabc4e8469adb94e0ccfb51ab19541"}}
      :ns-default jibbit.core}
and then I added this jib.edn
{:aliases [:ring]
 :base-image {:image-name "openjdk:11-slim-buster"
              :type :registry}
 :target-image {:type :docker
                :image-name "polylith"}}
I think this is a suitable example. I also added a clj -T:jib layers function (in addition to clj -T:jib build) to just dump a summary of how the layers will be populated and the entrypoint created.


Just seeing your issue! Will continue this in the issue thread: