
I'm building a jar inside a Kubernetes pod (clojure:alpine image) which is supposed to have the EC2 instance role's permissions to my jars on S3. Still I get this: Could not transfer artifact aaa:bbb:jar:0.1.8 from/to private-releases (): Access key cannot be null. I'm using the [s3-wagon-private "1.3.0"] plugin. Does anyone have experience with this? I'm using kops and upgraded an existing working cluster to Kubernetes 1.8.3. I can see the instances kept all the additional permissions I originally gave them, but they're not respected anymore. I also tried with k8s 1.6.12 and had the same problem. I've even explicitly set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with export AWS_ACCESS_KEY_ID=... before running lein uberjar.
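For context, a typical s3-wagon-private setup in project.clj looks something like this (the repository name and bucket path are placeholders, not taken from the thread; :no-auth is the documented way to tell the wagon to fall back to the AWS credential chain, which includes instance-profile credentials):

```clojure
(defproject aaa/bbb "0.1.8"
  :plugins [[s3-wagon-private "1.3.0"]]
  ;; Hypothetical private repo config; bucket name is an example.
  :repositories [["private-releases"
                  {:url "s3p://my-example-bucket/releases/"
                   ;; Let the AWS SDK credential chain resolve credentials
                   ;; (env vars, ~/.aws, or the EC2 instance profile).
                   :no-auth true}]])
```

If :no-auth (or explicit :username/:passphrase entries) is missing, some plugin versions fail with "Access key cannot be null" instead of falling through to the instance role, which matches the error above.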


I made it work by explicitly using clojure:lein-2.7.1-alpine instead of the latest lein-2.8.1-alpine. If I use the clojure:lein-2.8.1-alpine image, I need [s3-wagon-private "1.3.1-alpha3"] for it to work.
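A minimal sketch of the build-image pin described above (the base image tags are from the thread; everything else is an illustrative assumption):

```dockerfile
# Pin the Lein 2.7.1 image; lein-2.8.1-alpine needs s3-wagon-private 1.3.1-alpha3 or later.
FROM clojure:lein-2.7.1-alpine
WORKDIR /app
COPY project.clj .
# Fetch dependencies first so Docker layer caching can reuse them.
RUN lein deps
COPY . .
RUN lein uberjar
```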


aye, it is recommended to use 1.3.1-alpha3 at this point. @danielcompton maybe it's time to do a release 😉


I've published 1.3.1 now