
I work with a greybeard who has a hate hard-on for Docker/containers/cgroups. He always emails me in the middle of the night if some Docker thing (that he is forced to run) fails for some reason: "Look something didn't work! This is all Docker's fault. Back in my day we didn't need Docker..."


(a lot of the projects we need to integrate with have moved on to Docker in recent years, but he refuses to learn how it works)


I have a lot of sympathy for that -- but I accept Docker is a reality and I just need to suck it up and learn it / make it work!

✔️ 1

I find myself having a bit of trouble wrapping my mind around Docker. Not hating on it because of that, of course. I kinda love Docker and am amazed by it.


I feel the same way as you @pez

fist-bump 1

I think it's generational more than anything. At my first job (a small, high-growth Danish startup) the tech lead was an early adopter of Docker, so it's always been a thing in my professional life. I can see why it would be threatening if you worked without it for 40 years.


now Kubernetes on the other hand....

😂 1

I’ve written about Docker here before, but I’ll reiterate. I see a very, very narrow use case for Docker. Like if you’re a contractor working for multiple clients at a time, you can spin up client A’s stack in isolation from client B’s. But other than that, I just see a ton of added complexity which brings little if anything at all to the table. Now I need to remember to start Docker, now I can’t use ps aux | grep whatever to see what’s running, want to connect to my psql I have to remember docker exec whatever some random name


Now, if someone could give me the trade-offs for using docker, I’d be able to argue better or even shut up and accept docker, but I have yet to be presented with any such, other than "Docker is good".


For us, at work, we need multiple services available for running our dev stack locally, so being able to say docker-compose up to get all of that is convenient.

☝️ 3

(we need MySQL (Percona), Redis, and Elastic Search for this)
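(for the curious, the compose file for a stack like that stays short — this is just a sketch, with image tags, ports and paths invented:)

```yaml
# hypothetical docker-compose.yml -- tags and ports are made up
version: "3"
services:
  mysql:
    image: percona:5.7
    ports: ["3306:3306"]
    volumes: ["./data/mysql:/var/lib/mysql"]
  redis:
    image: redis:6
    ports: ["6379:6379"]
  elasticsearch:
    image: elasticsearch:7.17.0
    ports: ["9200:9200"]
    environment:
      - discovery.type=single-node
```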


Sure, we have the same thing. But I don’t see what Docker brings to the table here. brew install the shit out of it? It’s not like our docker stack changes every three days?


(and yes, there might be parts of the stack that aren’t brew installable, but I think my case still stands. Writing a babashka script that installs the stack on your machine is done in a jiffy.)


(yes I’m using the Turing-complete argument)


To me, Docker is essentially a technology which takes something that is procedural, ad-hoc and mostly manual, and makes it declarative, structured and mostly automatic. Other benefits include modularity and immutability. The process isolation is a curse and a blessing. A lot of the hate is based on that (why can't I see my X the way I could before).

❤️ 1

@slipset but with Docker I can stand up any specific set of services I need for one task and then tear them down easily and stand up a completely different set of services


For example, we use Daux to generate HTML documentation from Markdown. It's PHP-based. But I don't need any of that installed to do this: I can run Docker with Daux to generate the docs and then it's all gone.
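It looks roughly like this — quoting from memory, so treat the image name and flags as a sketch:

```shell
# generate the docs with Daux from a throwaway container; --rm cleans it up after
docker run --rm -v "$PWD:/build" daux/daux.io daux generate
```

No PHP, no Composer, nothing left behind once the container exits.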


I don't need Percona, or Redis, or Elastic Search installed to run my work dev stack -- I can just issue one command (`docker-compose up`) to have all that running and then it can all go away when I'm doing other stuff.


Yeah, but what’s the benefit? I see that if I’m working as a contractor, but you’ve been at WOS for ages, right, so you’ll have this crap on your machine anyway?


And, I imagine services up whatever should work as well?


But I don't have any of "this crap" on my machine. I can use my laptop (for example) for any purpose without any of that running. When I decide to use it for work, I spin up those services, work, then shut them down. If I want Percona 8.x for a test, I can spin that up without installation (work requires Percona 5.7). I can run any version of any service I want with Docker without installing anything.
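Concretely, that version juggling is just different image tags — a sketch from memory (names, password and ports made up):

```shell
# the work stack wants Percona 5.7...
docker run -d --name percona57 -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 percona:5.7

# ...but a one-off test against 8.0 is just another tag, no installation
docker run -d --name percona80 -e MYSQL_ROOT_PASSWORD=secret -p 3307:3306 percona:8.0

# done testing? gone without a trace
docker rm -f percona80
```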


Fair tradeoff.


Frankly, for me, the ability to run a process via Docker to generate HTML from Markdown without having PHP installed is a giant win all on its own!


I thought the whole idea of Docker was dumb until I started using it for "throwaway" instances of stuff...


In my experience the biggest benefit of containerization (not necessarily docker) is that for the most part it solves the "but it works on my machine" problem. It offers a common way of setting up an app, so in most cases all you have to do as a user is configure some environment variables and beyond that it's plug and play.

☝️ 1

Which is great if you distribute apps, but I work at a SaaS company. We distribute our thing over https


I also work on a saas. we have an environment with several services spanning ~20 servers. I can't imagine managing these without containers. especially upgrades and rolling out many releases. IMO container image distribution is very well done. works like git.


Yeah, I see I’m losing this battle. But I will never capitulate!


I can see the “it runs on my machine” if you’re programming rust/c/whatever, but clojure peeps are generally running code on a Java Virtual Machine. So we’re already virtualized.


And we deploy our stuff as an uberjar, which is a thing that contains all our deps.


It’s not like programming in C where you have to have the right version of libwhatever under /usr/lib/ . There I deffo see the point of containerization.


Let me just go on a tangent here. One of the things I’ve realized/been able to verbalize over the last couple of weeks is the luxury of not having problems. So I can say, well, yes, but that’s not a problem I have, so I don’t need the complexity that solution brings on. Examples. In Java, we have a garbage collector, so I don’t have to do stuff in my code to handle memory. In Clojure, we have immutable collections, so there’s a bunch of locking stuff I don’t have to do. At work, we don’t do micro-services. So there’s a ton of orchestration and monitoring tools I don’t have to care about. So basically a ton of problems I don’t have. And, running most of my stuff on the JVM, Works Only On My Machine(TM) is not a problem I have, so I don’t have to put complexity in place to solve it.


@slipset I think if your setup is basically just “I need to run java -jar” then Docker brings little except for—probably unnecessary—process isolation. I have a hobby project which is just that, so it never crossed my mind to use Docker. Maybe I will if the scope ever changes. However, when you need to run multiple services on one machine and you don’t want to deal with them interfering with each other—e.g. using different lib versions or trying to use the same ports—well then Docker solves that problem for you. It also makes it so that your system configuration is documented and makes it reusable by other people. Its component parts (i.e. images) can be further modularised (= made into smaller images), recombined and the code can be reused in other contexts. The way I see it, a Dockerfile is essentially just a shell script with a few restrictions imposed on it to make whatever image it produces immutable and the process itself idempotent. As you know from Clojure, immutability allows for a certain kind of reasoning about a component. Once you have multiple immutable components you can easily orchestrate them declaratively, e.g. in a docker-compose.yml file. Personally, I just use docker-compose since I also feel a certain container fatigue, but there’s no doubt that learning to use Docker and read a Dockerfile/docker-compose.yml makes it very easy to reason about how a system is put together. Usually when using setups created by other people, there is no need to look at the Dockerfile (the procedural mapping), you can just look at the docker-compose.yml file to get a sense of how the system is composed and how the components communicate. This is a major benefit IMO.
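To make the "shell script with restrictions" point concrete: a minimal Dockerfile for an uberjar app is just a few steps (tag and paths here are invented):

```dockerfile
# each line is roughly a shell step, but the result is an immutable, reusable image
FROM eclipse-temurin:17
COPY target/app-standalone.jar /app.jar
CMD ["java", "-jar", "/app.jar"]
```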


For testing next.jdbc, the ability to spin up a SQL Server instance on macOS via Docker is a big win.

🙌 2

Even testing against MySQL/MariaDB, I can do via Docker with zero-install. That's a big win.


Can’t argue against that.


I definitely agree that there are valid reasons to avoid containerisation—it’s always gonna be a trade-off. Now this guy at work… I think he’s a cool guy and he’s pretty damn smart (he studied particle physics AFAIK), but he is a classic greybeard (which is meant affectionately). I didn’t just call him that because he sends me hatemail about Docker in the middle of the night 😛 he literally has a grey beard, is nearly 70 years old and prefers to program in his own programming language that he developed back in the 1980s. If he is unable to use that language, he thinks C is the way to go for his projects. I prefer using Clojure for my projects and I definitely don’t mind using Docker, building new software on top of a graph database, or adopting some other newer technology. We just live in two different worlds.


Hey, I have a gray beard too 🙂, and I'm not even in my 40s yet 😛

✔️ 1
😁 1

what am i doing wrong ? i'm in my 50s and my beard remains resolutely ginger!


my hair is grey and my goatee is too, but can't grow a proper beard and I'm 50. Where am I going wrong?


@orestis Get out of here, old man. @U0524B4UW This is acceptable. @U0525KG62 Get out of here, little kid.

😂 1
👴 1
👶 1

Hey, I barely have a grey hair on my head. One day I'll shave and I'll pass for 30 again


one day I'll shave my head and look like a boiled ham

😂 3

I’m in my fifties and wish I could grow a beard.


having long been using docker on the server, via k8s (well, it may not be docker anymore... but some sort of containerisation) server-side, we just introduced a nix+docker option for our dev environments (nix for the dev tools with some tricky version requirements, and a docker composition for the services our app depends on) - and it's a breath of fresh air. it's all declarative and version-controlled, uses the same base images as we use on the production containers, and best of all - new starters can reliably follow a simple script to get up and running


I like Docker for the declarative part, but I don't use it for daily development, I run all the needed services manually, since on macOS the overhead is considerable. If the overhead was lower I'd use it.


we're all on macos too @orestis - running cassandra, kafka, gnatsd in a docker composition - and haven't really noticed any practical difference in resource issues since switching to docker based services - i guess the memory-compression (best o/s feature in recent memory) and SSD-based VM is taking care of it acceptably 🤷‍♂️


Good morning



😁 5

Took that this morning and was going to post it anyway but these jokes just write themselves


So much CI infra is possible with Docker that I am happy to have it around to reduce the pain. It's like morphine. Similarly, I avoid using it on myself.


I’m curious about which parts of Docker are more declarative than a babashka thing:

(require '[babashka.process :refer [shell]])

(def deps [{:name "psql" :version "1.12"} {:name "yarn" :version "tomorrows"}])

;; install! is our own little helper -- it just shells out to brew (or whatever)
(defn install! [{:keys [name]}] (shell "brew" "install" name))
(doseq [dep deps] (install! dep))


I guess my point is that I perceive the opposition arguing Docker has this one benefit, hence Docker All The Things.


I see that there are valuable things that Docker brings to the table, but perhaps one could look at those things in isolation, see if there are other solutions to the same problems that might be better suited.


I never stated that other things cannot be declarative. The point is that declarative is usually better than imperative. If you are able to do things declaratively without Docker, then by all means do so.


Like, if I want a declarative way of installing deps on my machine, well, I don’t need Docker for that, even though Docker provides that.


Docker is for setups of situated software, not application dependencies.

☝️ 1

systemd is declarative - this is a good thing. I also use systemd. One doesn’t exclude the other.


The two main downsides for me are: 1. Yet another tool that I need to learn 2. I need to run a virtual machine on my box (since I’m running macOS). I can see that this is less of a problem on a linux system. And, I see not too many benefits over a well scripted bash/babashka installation script.


Yes, these are the common talking points. I hear them every week 😛


Perhaps you should have some sympathy with your users?


Like, your greybeard doesn’t care about what is convenient for you. He cares about what is convenient for him


My users don’t give a flying f*ck about what technological issues I may or may not have. They just want to have their job done.


I am not forcing Docker on anyone.


We don’t even work on the same projects. He just complains to me because he knows I use Docker voluntarily in some of my projects.


Ok, then I don’t see why he’s doing what he’s doing.


I’ve also slowly accepted that the ship has sailed.


He's probably doing it because Docker is messing with his usual workflow and that's frustrating to him. Then when he hears me saying that I used Docker for such and such at a meeting, he thinks he needs to convince me it is actually a bad thing because it seems like a huge regression to him. But like you say, the ship has sailed. System configurations are routinely made available as Docker setups nowadays.


What's also nice about Docker-like things is that you can make an environment and push it to cloud stuff like AWS and Heroku-like platforms. One of those now uses nix to create environments.


I usually like to have a database or so running in Docker, as it won't pollute my globally installed things, e.g. a very specific version of postgres


and ElasticSearch, Redis, things like that


docker does provide isolation for tools that don't have good dependency systems where things will clash (we are soooo lucky that the JVM classpath works the way it does), it also provides isolation for versions of external services you might use (and need to change)


I usually run my Clojure REPLs on the bare machine as I don't want to have any problems with ports, filesystems, etc


I am by no means a Docker expert, but I like that I can go away for 2 years, come back, take a look at some old Docker configuration I made, and quickly get an overview of how it’s wired together:


Those compose files are simple to read and once you’ve seen a few you realise they all do the same things. They list which ports are being used to communicate, how components depend on each other, and where data is written to. And other people who know a bit of Docker can read them too and quickly get at what’s going on. That is the primary advantage of the declarative part for me.


But what are you saying in this file? You have 20 lines of YAML saying nothing?


In my previous job we depended on a very specific version of postgres with a postgres extension (that only works on linux). So Docker was about the only thing that worked anywhere for this.


yup indeed. Welcome to academia.


@slipset That yaml refers to other Dockerfiles in parent directories. Those contain the interesting bits.


The interesting bits being shell scripty bits (yes, I’m being facetious 🙂)


I’ll stop now 🙂


I’m on both sides of this discussion. With Clojure projects, there’s really no reason to put stuff in Docker unless you want to run it with some cloud provider. For nearly anything besides Clojure, I would just start with Docker, because you know it will fail once you start mixing boxes


I think the point is that enterprise (Clojure) projects are rarely just Clojure, but often involve databases, search, key-value store, etc, etc

☝️ 3

Setting up multiple database instances on the same machine is much, much easier with docker/containers and I can't think of a simpler/easier way to simulate an entire network of apps on a single machine (this is a godsend for self-hosting). Once you learn some basic concepts it becomes really convenient. Docker in particular definitely has its issues and I'm far from an advocate of let's put docker/containers everywhere but I can't deny the usefulness of containerization. (issues: e.g. security is not great but it's improving, I think, and you share the same kernel as the host, so certain stuff becomes difficult to do, like connecting to a VPN from within a docker container)


I've rarely "needed" docker for enterprise work. Databases have secure internet connections, and dev/test environments are easy to spin up in the cloud, along with a nice deployment pipeline if required. I'd prefer to use the cloud to spin up things on demand and avoid having a whole bunch of docker images locally (one of the benefits of the cloud). There are specific cases where docker can add value, but I have also seen the use of docker due to some design, architecture or product choice that adds complexity for little reason. Or on a few occasions people just following the dockerize-everything trend. I was asked once to get a web designer to ship the static web pages in a docker file so we could easily deploy it (I left that company soon after). The biggest challenge I find with docker is monsters of yaml files that aren't documented, where no one on the team understands how they work or what happens when they are broken. While you can write clear and simple docker files, I have experienced a lot of them that made me want to cry...


It's great for complex/fast evolving tooling as well. A lot of orgs have onboarding that consist in installing/relying on a bunch of images

👆 1

One of the core advantages that others may not have mentioned is that all major cloud providers offer the ability to run docker containers pretty easily. They provide services such as healthchecks/monitoring, auto scaling, resilience (to ensure at least X containers are always running), integration and routing with their loadbalancers, service discovery and so on, which makes for a very compelling reason for the containerisation of stuff. Our build pipeline builds an uberjar, adds in some prometheus exporters, some environment variables, defines which JVM to use and deploys it to a cloud registry. Then, the service is simply told to pull the latest docker image and redeploy itself with X replicas and such and such monitoring and alerting. We don't have to worry about ensuring our VM images are up-to-date, or that the JVM is consistent across all VMs and so on - the cloud provider does it all.


I know also, for example with Amazon, you can say, here is a docker image - I want to have at least 2 cpus for it and 1G of memory - go do it, and Amazon will just take care of it for you (the service is called Amazon Fargate).

💯 3

which version of the JVM plays well with containerisation? You have to set some of the heap settings as well IIRC


We are using Eclipse Temurin 17 quite successfully 🙂


We don't mess with any heap settings at all 🙂 We just run the JVM vanilla (with a slight modification to use the /dev/urandom device to speed up the startup)


Very old versions of Java 8 (and prior) were not aware of being run in a container (they basically assumed all the memory and CPU cores of the host were theirs to play with). That was fixed (if I recall correctly) in Java 10 and backported to later Java 8 releases.


So since then (for a few years now), Java knows if it's running containerised or not.
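A sketch of what that looks like in practice — not our exact settings, just the standard knobs:

```shell
# print the resource limits the JVM actually detected (Linux only)
java -XshowSettings:system -version

# size the heap as a percentage of the container's memory limit
# instead of hardcoding -Xmx for each deployment
java -XX:MaxRAMPercentage=75.0 -jar app.jar
```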


> One of the core advantages that others may not have mentioned is that all major cloud providers offer the ability to run docker containers pretty easily.
I did mention that above, but thanks for the elaboration! Monitoring etc. is important.

👍 2

My main gripe with Docker is I’ve found it to be unusably slow (even for dev).


It’s annoying. I don’t prefer it. But I would use it for things like dev dbs if it weren’t gawd-awful slow.


So I end up spinning up my own brew install dbs anyways, even though there are docker compose commands available.


@dharrigan I spent the last four weeks tuning java and container heap sizes. Unfortunately it's still nontrivial even with the latest java 8 vms when some of your apps use huge stacks, others load 100k class files and others use more than 10G non-heap memory (for reasons yet to be known)


@potetm what part of docker is slow for you? When running it's "just cgroups" served conveniently, isn't it?


I have no idea what crazy docker for mac does.


Ah! Therein lies the problem.


What I know is running a stack of Elasticsearch/postgres/redis on mac is laughably slow.


Or at least was a year ago.


In my local Linux development machine, I spin up 6 containers (two elasticsearches would you believe! one old one new). It's pretty darn responsive for me 🙂


I also use LocalStack (if you haven't seen it, it's the proverbial bee's knees) for emulating the amazon infrastructure locally!


Good morning! A nice lively discussion! I find docker nice for the consistency it provides. Everyone on a team can be working with the same consistent services (that match production). Quickly. And they can quickly switch to another set of services for another project. I also find it really useful to test whatever in variant x of linux. And, yeah, it is sucky slow on macOS.


I wonder, I have a mac mini m1 with docker installed which now natively supports Apple Silicon. A quick check reveals that Elasticsearch, Redis and Postgres all have ARM64 variants of their images.


That should be a whole lot quicker than Docker on Intel Macs.


I can't verify that natch 🙂


Me neither! Not on my 2013 iMac! 🙂


I think that docker on intel mac has to spin up a vm to emulate a linux environment which could possibly contribute to the slowness.


When Docker for Mac got annoyingly slow for me I switched to using Nix and foreman. My current dev env just contains a wrapper script pointing foreman to a Procfile that starts our two apps and all the DBs, and a Babashka script which can tear down and initialize the databases. So I just do foreman start and then cider-connect-clj. Nothing installed globally, all the databases are pointing their data to a project-local directory. I'm super happy with it and enjoy exploring all the possibilities of Nix (which can also build tiny Docker containers containing nothing but your app 😉). But it also doesn't require Nix to use it: as long as you have all the executables present on your PATH you can just use foreman to start and initialize everything. Especially if I'm changing multiple codebases at once this is super convenient because I don't have to deal with Docker volumes or complex Dockerfiles: it's all just running on my machine 🙂
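For anyone who hasn't used foreman: the Procfile is just process names mapped to commands, something along these lines (app name, commands and data dirs here are invented):

```
app: clojure -M:dev -m myapp.server
db: postgres -D ./data/pg
redis: redis-server --dir ./data/redis
```

foreman start launches all three and multiplexes their output; Ctrl-C tears them all down.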


Ah I didn't realize it was so hard to search for. This one:


I also took the time this week to package up my development environment into a little reusable tool that generates a Procfile and gives you a command to run it:

👀 1
Jakub Holý (HolyJak) 06:06:51

It would be helpful to expand the readme a little more, to explain the problem it is trying to solve and what roles nix and foreman play in the solution, for those of us who have heard little about them before 🙏 Perhaps another example with multiple services would also be helpful to a newcomer. The example directory has only nix config. I would expect it to also have a foreman Procfile. I guess I really do not understand the roles of nix, uno, and foreman here. Ah, I see, Uno will produce the Procfile based, I guess, on the packages installed and commands configured.


Thanks for your interest! For sure, it's pretty barebones at the moment as I figure out how it should actually work. I plan on creating a more elaborate example and explaining some motivation and capabilities and then doing an actual release. And yeah, the Nix config produces a Procfile when you run uno start and then it basically does foreman start --procfile=$procfile. The disadvantage here is that it's no longer possible to use the config without Nix, but it also enables some other things like running multiple versions. I kind of intend this tool as a soft onboarding to Nix :)

👍 1
Jakub Holý (HolyJak) 07:06:47

Ok! I will be watching #announcements then 🙂


I guess it's not a Clojure related announcement— I'll share in #nix

👍 1
Jakub Holý (HolyJak) 07:06:49

If you remember then please ping me upon the release 🙏


Will try to! :)


@josh604 I know some of those words!


No but seriously, Nix sounds interesting and space age.


AFAIK isn’t the issue with Nix that it needs Nix-specific configurations for everything you will need? Or did I misunderstand something at some point? How easy is it to “convert” a project, if you will?


You can do a lot of stuff with Nix but the most basic case is "download a program and put it on my PATH", which you can do with nix-shell -p somePackage which starts an ephemeral shell with the package's binaries available. If you want to make it more permanent, you can create a shell.nix file (or a flake.nix file but this is currently experimental). Then running nix-shell will start a shell with all the packages listed in that shell.nix. After that you can use whatever to actually launch the programs you need (like foreman). It's surprisingly boring 🙂
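For reference, a minimal shell.nix really is that boring — the package names here are just examples:

```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  # everything listed here lands on PATH inside nix-shell
  buildInputs = [ pkgs.postgresql pkgs.redis pkgs.foreman ];
}
```

Run nix-shell in that directory and you get a shell with those tools available, without touching your global setup.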


It definitely gets more complex to "convert" a project to Nix if you're going to use Nix for building/deploying/running in production -- but you don't have to start there. I still just use it for my dev env


I see, thank you. Will need to explore more.


Not so easy, but once you have a config / flake, you can share it and from then on, things get better :P

👀 1

There's also a #nix channel

👍 3

nix to me isn't very intuitive and I still have trouble comprehending its language syntax, so I can say from experience that it has a bit of a learning curve.


But the concept is very promising

