
Is anyone using Docker to build their jars? If so, do you have a separate Docker image for building (including Leiningen and the source code) and a minimal image with just a JRE and the uberjar to run in production?


We haven't used Docker to build our jars. We built the uberjar on a bastion box, then ran it in a JRE Docker image.


I see no problem with having one Docker image for building and another for running, though. In fact I think it's a good idea.


@yonatanel It minimizes JDK mismatch, snapshot mismatch, and the local state that lives on each developer's machine. You don't want a "works for me" situation where it only deploys from Annie's laptop.


@dominicm Then I should ask: what do you mean by bastion box? I had to look it up and it seems more security-related.


@yonatanel admittedly, we also use that box to deploy the massive jar; it's quicker to do it on the same network as the one you're uploading to. We also have PCI requirements that mandate we control access in certain ways. The name is somewhat misleading in this particular line of conversation.


hmmmm, we don’t use Docker to build our jars, but we do deploy our jars using Docker. The build server builds the jar, then creates the Docker image and adds the jar to it.


I was just working on a project that built in sort of two steps (pretty much what dominicm is describing): a Docker image where all the tests could run, whose output was an uberjar, and then that uberjar was bundled into a more minimal Docker image for running.
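That two-step flow can be sketched with a multi-stage Dockerfile — this is only an illustration, assuming a Leiningen project named `myapp` and the official `clojure` and `eclipse-temurin` base images:

```dockerfile
# Build stage: JDK + Leiningen + source code (tests could run here too)
FROM clojure:lein AS build
WORKDIR /app
COPY project.clj .
# fetch dependencies first so they cache as their own layer
RUN lein deps
COPY src ./src
RUN lein uberjar

# Run stage: minimal JRE only, no build tooling
FROM eclipse-temurin:21-jre
WORKDIR /app
# path assumes lein's default standalone uberjar name
COPY --from=build /app/target/myapp-standalone.jar app.jar
CMD ["java", "-jar", "app.jar"]
```

The final image contains only the JRE and the jar; the build stage with Leiningen and the sources is discarded.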


the testing docker image can make ci nice and simple


the tricky thing is most docker "best practices" are aimed at production app sort of things


your testing docker image should definitely be all-in-one: spin up any other server processes etc. you need inside the container, rather than running them as distinct containers wired up with whatever your favorite tool for that is. That's the opposite of the advice given for production apps.


@hiredman I think you can do that now with docker-compose, which does use distinct containers but in a common environment.
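For what it's worth, a minimal sketch of that kind of compose setup — the service names, images, and environment variables here are all assumptions, not anything from this thread:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test
  tests:
    build: .              # image containing Leiningen and the test suite
    depends_on:
      - postgres
    environment:
      # services share a network, so the hostname is the service name
      DB_URL: jdbc:postgresql://postgres:5432/postgres
    command: lein test
```

`docker compose run tests` would then start the dependency container and run the suite against it.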


Yeah, I would say do not use docker compose for your testing docker image


But for integration testing wouldn't I want to pull the latest dependency images, run them in a test setting and run against them?


Seems natural if everything is already dockerized


If you can just run the service in the same container where you run tests, what is the value of a more complicated setup?


If I write two services where one depends on the other, and both of them are docker-ready and run that way in production, I'll just run them that way in tests as well. Even if a service is not dockerized, it seems more natural to me to dockerize it first and then use it in tests, instead of creating a complicated Docker image with everything in it. I didn't try either option though.