The “Installing Jenkins - Docker” chapter of the documentation creates a docker network and two containers connected to it. The first container is jenkins/jenkins which AFAIU is the “jenkins controller”. The second container is a docker-in-docker (docker:dind) and I don’t understand its purpose. Could you please demystify it for me?
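For reference, the setup from that chapter looks roughly like this (simplified from memory; volume names and some TLS details elided, so treat it as a sketch rather than the exact documented commands):

```shell
# Shared network so the Jenkins controller can reach the dind daemon by name
docker network create jenkins

# Docker-in-Docker daemon; --privileged is required for dind
docker run --name jenkins-docker --detach --privileged \
  --network jenkins --network-alias docker \
  --env DOCKER_TLS_CERTDIR=/certs \
  --volume jenkins-docker-certs:/certs/client \
  docker:dind

# Jenkins controller, pointed at the dind daemon via DOCKER_HOST
docker run --name jenkins --detach --network jenkins \
  --env DOCKER_HOST=tcp://docker:2376 \
  --env DOCKER_CERT_PATH=/certs/client --env DOCKER_TLS_VERIFY=1 \
  --volume jenkins-docker-certs:/certs/client:ro \
  --publish 8080:8080 jenkins/jenkins:lts
```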
I know that Jenkins can be installed without Docker at all (“Installing Jenkins - Linux”). I also assume that I can use the jenkins/jenkins container by itself (implied by this section). So at first I thought that the docker-in-docker container is used for instantiating “jenkins agents” as containers, but after reading some more chapters of the documentation I’m no longer convinced that this is the case.
So what is the purpose of this docker-in-docker container and how is this setup different from when the docker-in-docker is omitted?
If Docker in Docker is omitted, then the user would need to configure an agent with the tutorials’ build tools installed, or would need to configure an agent plus a global tool installer that downloads and installs those tools on the agent when needed.
For a deeper review of Docker containers as agents, see the tutorial video by Darin Pope.
To add to this, it’s partly a security thing. To run Docker agents, you need a Docker daemon (that is changing, but the alternatives aren’t yet stable). You can either use an external Docker daemon via a shared socket or a TCP connection, or you can run your own standalone Docker daemon.
The problem is that once you have access to the Docker daemon, you effectively have root access to that system (e.g. by setting the UID, mounting host volumes, etc.). Most people don’t want to expose root to anyone, so if you run a standalone Docker daemon inside Docker, the damage that can be done by escaping a container is minimal. This also prevents someone from, say, running “docker export” on your Jenkins container and walking away with all of its credentials and configuration.
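To illustrate why socket access amounts to root: anyone who can talk to the daemon can mount the host filesystem into a container and run commands on it as root. A minimal sketch (don’t run this on a machine you care about):

```shell
# Mount the host's root filesystem into a throwaway container, then
# chroot into it: at this point you are effectively root on the host.
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```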
Docker in docker is a confusing thing no matter what.
When I was looking at doing this kind of install (Jenkins running in a container, able to start up dockerized build tools), I read the guide, which recommended the docker-in-docker image. But that image’s home page links to an article explaining why you really don’t want to use it and all the bad things that might happen if you do. So I eventually worked out how to pass the Docker socket to Jenkins and resolve the permissions issue.
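For anyone trying the same thing, the socket-passing approach looks roughly like this (a sketch of what worked for me; the group owning the socket varies per host, so the GID is looked up rather than hard-coded):

```shell
# Mount the host's Docker socket into the Jenkins container, and add the
# container's jenkins user to the group that owns the socket so it has
# permission to use it.
docker run --name jenkins --detach \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  --publish 8080:8080 jenkins/jenkins:lts
```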
My question is: why does the handbook recommend the DinD approach when the image itself warns users away from it, and should the handbook perhaps be updated with another option?
Here I was using a build tool image to run Ant to compile my code, but I don’t think I could have built the container image from within that build container (that would be docker in docker in docker). So I used a new stage which was not running in the build tool container. However, when I first tried this I discovered that Jenkins had created a new workspace for the second agent, and my build artifacts weren’t there. The solution I found was to use a top-level “agent any” and then put “reuseNode true” in the first containerized stage, which prevents it from creating a new workspace. The second stage has no agent statement, so it also reuses the top-level agent and therefore sees the build artifacts from the first stage.
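A minimal sketch of that pattern (the stage names and the build-tool image name are illustrative, not from an actual project):

```groovy
pipeline {
    agent any  // top-level agent owns the workspace for the whole run
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'my-ant-build-image'  // hypothetical build-tool image
                    reuseNode true              // reuse the top-level agent's workspace
                }
            }
            steps {
                sh 'ant dist'
            }
        }
        stage('Package image') {
            // no agent block: runs on the top-level agent in the same
            // workspace, so artifacts from the Build stage are still present
            steps {
                sh 'docker build -t my-app .'
            }
        }
    }
}
```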
Running Jenkins in a container and having it launch build tools in containers is like the movie Inception. It would be great if the guides covered these things users are likely to run into.
The tutorials use docker in docker because it is a very direct path to illustrate different build tools without requiring that agents must be able to build docker containers.
Pull requests are welcome with replacement tutorials that don’t require docker in docker. Every time I’ve tried to find a path that allows the type of flexibility you want (docker container builds from any agent), it requires either Docker in Docker or more detailed configuration steps than most first-time readers are willing to follow. If you have a simpler path to guide users to that, I’m ready to review it.
One suggestion has been that the tutorials could use a configuration based on docker compose and could store the definition in a GitHub repository. That would still require docker in docker, but would allow multiple agents and a more interesting configuration of those agents.
An alternative would be to return to having the tutorials use operating system installation packages instead of Docker containers. However, that’s more difficult to describe because there are operating system installers for three Linux variants (Debian, Red Hat RPM, SUSE RPM) and a Windows variant (MSI). Docker images allow Linux, Windows, and macOS users to operate with largely the same instructions.
Based on the above very good points, I would suggest incorporating some of your previous reply into the guidebook. I would add language noting that the DinD approach listed there is for a quick and easy introduction/prototype, but is not recommended for production, where a more detailed configuration that doesn’t use DinD would be advised.
This would help it not appear that the guidebook is recommending DinD as a best practice, which is how the current form may be interpreted.