Migrating Jenkins from a Server to a Container

Jenkins setup:
Jenkins: 2.319.2
OS: Linux - 4.4.0-1161-aws
Java: 1.8.0_382 - Private Build (OpenJDK 64-Bit Server VM)


Right now, my Jenkins is running on an Ubuntu 16.04.7 LTS server.
I want to migrate to Debian 12 and run Jenkins in a container instead (maybe with docker compose?).

I have some questions:
Can you guide me on how to migrate without losing data or configurations?
I suppose it is better to use containers, right?

Thank you very much for your help!

You can get your existing plugin set from your JENKINS_HOME/plugins; the .hpi or .jpi files there are your installed plugins.

There are a couple of ways to get Jenkins in a container, but a summary of how I would do it is the following:

  • Create a tar file of your plugins, laid out to match whatever Jenkins image you’re using.
  • Create a Docker image with only the Jenkins version that matches your old Jenkins.
  • FROM that image, build another image adding your tar of plugins.
  • Launch a container with a copy of your JENKINS_HOME.
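Under stated assumptions (plugins in JENKINS_HOME/plugins, the 2.319.1-jdk11 official image), the steps above can be sketched roughly like this; the demo JENKINS_HOME, paths, and image tag are illustrative, not a drop-in script:

```shell
#!/bin/sh
# Sketch of the four steps above. The demo JENKINS_HOME and tag are assumptions.
set -e

# Simulate an existing JENKINS_HOME with a couple of plugin files.
JENKINS_HOME="${JENKINS_HOME:-./jenkins_home_demo}"
mkdir -p "$JENKINS_HOME/plugins"
touch "$JENKINS_HOME/plugins/git.jpi" "$JENKINS_HOME/plugins/credentials.jpi"

# 1. Tar the plugins, laid out as plugins/*.jpi so extracting the tar over
#    /usr/share/jenkins/ref/ puts them where the image expects.
tar -C "$JENKINS_HOME" -cf plugins.tar plugins/

# 2. A Dockerfile pinned to your old Jenkins version, layering the tar on top.
cat > Dockerfile <<'EOF'
FROM jenkins/jenkins:2.319.1-jdk11
ADD plugins.tar /usr/share/jenkins/ref/
EOF

# 3. Build and launch with a copy of your JENKINS_HOME (requires Docker;
#    shown for reference only):
#      docker build -t my-jenkins:2.319.1 .
#      docker run -p 8080:8080 -v /mnt/jenkins_home:/var/jenkins_home my-jenkins:2.319.1
tar -tf plugins.tar
```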

This somewhat manually created image can be the starting point of your containerized Jenkins controller.

Initial testing

With Jenkins config rescue (Issue #27 · samrocketman/blog · GitHub) you can create an archive of your JENKINS_HOME without your build history. The archive will be pretty small, because the build history, war, and plugin data are the largest parts of JENKINS_HOME.

Your docker image should already have the war and plugin data so really you only need the config for testing.
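A sketch of that rescue-archive idea under assumed paths (the demo layout and exclude patterns are illustrative; the linked issue is the authoritative version):

```shell
#!/bin/sh
# Sketch: archive JENKINS_HOME minus the heavy parts (build history, war,
# plugin data). The demo directory layout and exclude patterns are assumptions.
set -e
JENKINS_HOME="${JENKINS_HOME:-./jenkins_home_rescue_demo}"

# Fake a small JENKINS_HOME for the demo.
mkdir -p "$JENKINS_HOME/jobs/myjob/builds/1" "$JENKINS_HOME/plugins" "$JENKINS_HOME/war"
touch "$JENKINS_HOME/config.xml" \
      "$JENKINS_HOME/jobs/myjob/config.xml" \
      "$JENKINS_HOME/jobs/myjob/builds/1/log" \
      "$JENKINS_HOME/plugins/git.jpi"

# Keep global and job config; drop build history, plugin, and war data.
tar -C "$JENKINS_HOME" -cf config-rescue.tar \
    --exclude='jobs/*/builds' --exclude='plugins' --exclude='war' .
tar -tf config-rescue.tar
```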

Use this opportunity to familiarize yourself with hosting Jenkins in a container from the Docker image you built.

Testing for upgrades

Once you have Jenkins operating you’ll want to plan for regular upgrades. You’ll want to migrate Jenkins, your job config, and build pipelines to “as code”. Once everything is as code you’ll have the following benefits:

  • You can provision reliable prod-like test environments
  • Not all plugin upgrades are perfect. Everything as code means you can regenerate the config for your prod infra. This is useful when an upgrade fails to migrate config. You just overwrite it.
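As a taste of “everything as code”: a minimal Jenkins Configuration as Code (JCasC) file might look like this (assumes the configuration-as-code plugin is installed; the keys shown are illustrative):

```yaml
# jenkins.yaml, read by the configuration-as-code plugin on startup
jenkins:
  systemMessage: "This controller is configured as code; manual edits get overwritten"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
```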

Hi @samrocketman ,
Thank you very much for your time.

I re-read your reply and tried several times before writing back.

I created a docker-compose file:

    services:
      jenkins:
        image: jenkins/jenkins:2.319.1-jdk11
        ports:
          - "8080:8080"
        volumes:
          - jenkins_home:/var/lib/jenkins/
      ssh-agent:
        image: jenkins/ssh-agent
    volumes:
      jenkins_home:

I took this YML from docker/README.md at master · jenkinsci/docker · GitHub

And ran
docker compose up -d and docker compose down

So my Docker volume got created.

Previously I had created a new EBS volume and attached it to the new instance at /mnt.
I removed everything in _data (the new Docker volume) and created symbolic links from my old EBS volume:

ln -s /mnt/var/lib/jenkins .

So I have everything from my old Jenkins.

The result is that I’m getting the fresh installation wizard instead of my existing setup.

I need to keep all the data: previous runs, configuration, everything.

What do you think, should I use docker or docker compose?
Can you share with me some good docs?

By using an internal volume you don’t have access to the data by normal means unless a container is running.

You should use a folder-based volume where your own data is mounted into the service. Also, you appear to be using the official Jenkins docker image directly, meaning you haven’t created your own image with plugins based on it. The image does support a plugins.txt, but initially what you need is the specific plugin versions from your server, not the latest.

You’ll want to create a Dockerfile that derives FROM the official docker image and then installs your specific set of plugins via tar. That’s just the starting point; later you can upgrade Jenkins using plugins.txt.

I also have my own way of running Jenkins in Docker, but it is niche and not officially supported by the community. It installs plugins via the Jenkins Maven repository.

Since you are on AWS you can publish your image to ECR and use an instance profile with an IAM policy that enables read access. I have a script which you can use to attach volumes by tags on boot using cloud-init, cloudformation init, or plain UserData.

Your startup process when autoscaling would basically be:

  • Log into ECR
  • Attach EBS volume
  • Start your Jenkins service, which should assume it already has access to your data and that Docker is able to pull from ECR.

Alternatively, my preference is to bake AMIs using Packer with everything preloaded onto the agent. Combined with autoscaling, this enables indefinite cold failover. Here’s a diagram of the approach, and I also have another diagram generically explaining how Packer works.

The benefit of using Jenkins with Docker is it enables more testing versatility. I can provision a similar Jenkins instance in AWS or even just on my laptop for experiments.

Hi @samrocketman,
Thanks for all your replies,

Why do you say I’m using an internal volume? You can see this in the docker-compose file:
- jenkins_home:/var/lib/jenkins/

You are right about

Also, you appear to be using the Jenkins official docker image directly meaning you haven’t created your own image with plugins based on it.

I actually got stuck at this step.
I’m using this guide: Docker

and at this point my Dockerfile looks like this.
Disclaimer: the image version is not the same; I’m just testing.

FROM jenkins/jenkins:2.426.2-jdk17 as builder
USER root
RUN apt-get update && apt-get install -y lsb-release
RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc \
  https://download.docker.com/linux/debian/gpg
RUN echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
  https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
RUN apt-get update && apt-get install -y docker-ce-cli
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean docker-workflow"

FROM builder

# Switch to the root user to install plugins
USER root

# Install your specific set of plugins using tar or other methods
RUN apt update && apt install -y tar
COPY ./plugins.tar /tmp/plugins.tar
RUN mkdir -p /var/jenkins_home/ref/plugins
RUN tar -xvf /tmp/plugins.tar -C /var/jenkins_home/ref/plugins

I have the feeling that the structure of Jenkins inside the Docker image isn’t like a normal installation.

About your second message, I agree with you; I have some containers running on ECS/ECR.

This is more of a docker concepts question; I have been glossing over a lot of deep details with short descriptions so I’ll expand a bit. This is going to be a long one because I will try not to assume much with this reply.

What you point out with volumes jenkins_home:/var/lib/jenkins/ is an internal volume because that’s what your docker-compose.yml file says.

When you have volumes listed it is the equivalent of docker run -v jenkins_home:/var/lib/jenkins/ and in your case jenkins_home is the Volume ID.

I know it is a volume ID because of this section of your docker-compose file at the root keys:

    volumes:
      jenkins_home:

This is the equivalent of docker volume create jenkins_home. So your compose file is saying:

  • Create a volume and
  • Create a container mounting the volume I just created to path /var/lib/jenkins

You have existing files on an EBS volume. Meaning you have files already available outside of Docker. So you don’t want to use an internal Docker volume; instead you want to mount a folder-based volume. For example, if you want to mount your EBS volume to /mnt/jenkins_home folder path then you would create your container with docker run -v /mnt/jenkins_home:/var/lib/jenkins

In your compose file, it would be:

    volumes:
      - /mnt/jenkins_home:/var/lib/jenkins

You would also remove the top level volumes key because you no longer need an internal docker volume.


Another problem you’ll encounter is permissions. The permissions of your jenkins_home EBS volume must match the user and group running inside of the Jenkins container even if that same user does not exist on the host.

For example, I run my Jenkins images on Alpine instead of the official Docker image because I like my own approach better (my opinion, of course). Because of that, I must match the jenkins user inside of the container (UID 100 and GID 101) with the file structure. In my specific case, I would chown with numeric IDs using the find command. For example,

find /mnt/jenkins_home \( \! -uid 100 -o \! -gid 101 \) -exec chown 100:101 {} +

Matching permissions in your case

In your case, you’re using the official Jenkins docker image so you must first discover the UID and GID of Jenkins. You can do this by running the official Docker image by overriding the ENTRYPOINT and the CMD. Here’s the docker command.

$ docker run --entrypoint '' --rm jenkins/jenkins:2.319.1-jdk11 id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)

The official Jenkins docker image uses UID 1000 and GID 1000. So you must make sure that your Jenkins files are owned by UID/GID 1000 on your EBS volume before starting Jenkins.
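A sketch of that pre-flight check, adapted from the find/chown example above (the demo directory stands in for your real mount such as /mnt/jenkins_home):

```shell
#!/bin/sh
# Sketch: list anything under the data directory that is NOT owned by the
# official image's jenkins user (UID/GID 1000). DATA is a demo path here.
set -e
DATA="${DATA:-./jenkins_home_perm_demo}"
mkdir -p "$DATA"
touch "$DATA/config.xml"

# Print mismatched files; to fix them, run as root:
#   find "$DATA" \( ! -uid 1000 -o ! -gid 1000 \) -exec chown 1000:1000 {} +
find "$DATA" \( ! -uid 1000 -o ! -gid 1000 \) -print
```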

Learn about the official Docker image

You first need to inspect the official Jenkins image to learn how it is laid out. You could use the dive utility, but in my case I’ll give you examples that use only standard Docker commands, requiring no extra software.

docker inspect jenkins/jenkins:2.319.1-jdk11

Relevant sections include

volumes section

"Volumes": {
    "/var/jenkins_home": {}

env section

"Env": [

cmd section

We know it is unused because of

"Cmd": null,

entrypoint section

Which shows you the shell script used to launch Jenkins.

"Entrypoint": [

How to inspect the shell script

You must override the entrypoint and pass a shell to CMD when launching an interactive terminal so that you can inspect the contents of the Docker image. This lets you use standard GNU utils to browse it.

docker run -it --rm --entrypoint '' jenkins/jenkins:2.319.1-jdk11 /bin/bash

Not all Docker images have /bin/bash or even utilities available. However, the official Jenkins docker image does.

Inspect the shell script and a support script that gets sourced:

less /usr/local/bin/jenkins.sh
less /usr/local/bin/jenkins-support

In a nutshell: if you have files in /usr/share/jenkins/ref, the entrypoint script copies them into your JENKINS_HOME on startup.
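The copy-on-startup behavior can be emulated roughly like this (a simplification: the real jenkins.sh also handles *.override files and version markers; REF and JENKINS_HOME are demo paths):

```shell
#!/bin/sh
# Rough emulation of how the official image seeds JENKINS_HOME from
# /usr/share/jenkins/ref on startup. Simplified on purpose; the real
# jenkins.sh also handles *.override files and version comparisons.
set -e
REF="${REF:-./ref_demo}"
JENKINS_HOME="${JENKINS_HOME:-./jenkins_home_seed_demo}"

mkdir -p "$REF/plugins" "$JENKINS_HOME"
touch "$REF/plugins/git.jpi"

# Copy each file from the ref dir that does not yet exist in JENKINS_HOME.
( cd "$REF" && find . -type f ) | while read -r f; do
  if [ ! -e "$JENKINS_HOME/$f" ]; then
    mkdir -p "$JENKINS_HOME/$(dirname "$f")"
    cp "$REF/$f" "$JENKINS_HOME/$f"
  fi
done
ls "$JENKINS_HOME/plugins"
# prints: git.jpi
```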

What have we learned through inspection?

  • JENKINS_HOME in the official docker image is /var/jenkins_home. You must mount your files to this location.
  • When baking plugins you can put your own set of plugins into /usr/share/jenkins/ref/plugins/*.jpi

Finishing up

Create your plugins.tar

cd /to/your/jenkins_home
tar -cf /tmp/plugins.tar plugins/*.jpi

Create your Dockerfile

FROM jenkins/jenkins:2.319.1-jdk11
ADD plugins.tar /usr/share/jenkins/ref/

Move your plugins.tar into current dir

Move your tar file into the current directory:

mv /tmp/plugins.tar ./

Create your new docker-compose.yml file.

    services:
      jenkins:
        build:
          context: .
          dockerfile: Dockerfile
        ports:
          - "8080:8080"
        volumes:
          - /mnt/your/ebs/volume:/var/jenkins_home

Starting your service

First verify your files are in place.

$ ls -1
Dockerfile
docker-compose.yml
plugins.tar

Start your service

docker-compose up -d

This will build a new Docker image packaging your plugins before starting your Jenkins service. It will mount your EBS volume (assuming proper filesystem permissions, UID 1000/GID 1000) to the JENKINS_HOME inside of the container.


This exhaustive explanation is the same suggestion I gave in my first post, but in more detail and with fewer assumptions of Docker familiarity. Rather than following guides, it is best to inspect Docker images directly to check your own assumptions. You can’t assume that just because a guide mentions the official Jenkins docker image it will work. Jenkins changes fast, and sometimes that includes the infrastructure.

Why /var/jenkins_home and not /var/lib/jenkins? Jenkins is multi-platform and multi-OS. /var/lib/jenkins is where the RedHat RPM installs the Jenkins home. However, this is not the same location as Jenkins on Debian or Ubuntu. And as we’ve discovered, it is also not the same as Jenkins running in the official Docker image. So you really need to keep Jenkins concepts in mind (like the JENKINS_HOME) and not assume you know where things are organized. I walked through how to do the discovery process for the official Jenkins docker image.

I’ve never used the official Jenkins docker image (only my own). So a lot of my explanation is just from reading the source, as I’ve explained; I don’t operate it.

@samrocketman, Thanks man! I appreciate your patience and effort.

Your explanation gave me a lot of insights, thanks again and G-d bless you.

As you already noted, I’m new to Docker too, and didn’t use the correct tools to understand the use case of this Docker image.

Just for the sake of sharing,

  1. docker inspect - before using someone else’s image, it is preferable to see what’s inside.
  2. docker logs - to see the output of containers.

As you said, the EBS volume should have the same UID/GID as the jenkins user; my container restarted every time, and docker logs showed me the permission errors.

So what I did:

ran docker run -it jenkins/jenkins:2.319.1-jdk11 bash

and checked the jenkins user’s id; as you already told me, it was set to 1000:1000.

After verifying the ID, I ran chown 1000:1000 /mnt/var/lib/jenkins
and checked the container and the volume with this command:

docker run --name jenkins-prod --restart=on-failure --detach --network jenkins --publish 8080:8080 --publish 50000:50000 -v /mnt/var/lib/jenkins:/var/jenkins_home jenkins/jenkins:2.319.1-jdk11

Browsed to the private IP (btw, I use Nginx as a reverse proxy), and I got all the configuration/logs/credentials, just everything!

So, Sam, let me ask you,

  1. In Docker, I know two types of volumes, persistent and non-persistent; when I use docker volume create, it is a persistent volume that lives outside the container (but you called it an “internal Docker volume”), right?
  2. Should I use the /mnt volume, or copy everything into the _data folder of a newly created docker volume?
  3. Do you know how to autostart this container after a server reboot?
  4. How should I upgrade this image? Let’s say I migrate to 2.319.1-jdk11; how can I upgrade it later?

Sam, Thanks a lot!

Thanks a lot for the time you invested in your answers, @samrocketman . :pray:

I feel your detailed guide deserves to be at least a blog post on jenkins.io or even be part of the existing documentation.
What do you think?

@poddingue, I completely agree with you.
@samrocketman Again, man, Thank you very much for your help! :love_you_gesture:

Sure, I can migrate these answers to help users migrate, wherever you think it will be most effective.


I believe opting for a blog post would offer a straightforward solution. Given the extensive overhaul of docker-related content on jenkins.io, pinpointing the exact location in the official documentation might be time-consuming. If you’re considering creating a blog post on jenkins.io, I’m here to assist you. Just let me know if you need any help.


It is internal to Docker itself, stored within Docker’s data directory /var/lib/docker. It will persist as long as your host has no issues, but because AWS marks system volumes to be deleted upon instance termination, it is not persistent in the AWS cloud.

As noted in previous point, continue to use an EBS mount so that your data survives instance termination.

Two ways, depending on OS:

  • Integrate docker compose with Linux startup (systemd or sysVinit depending on OS)
  • Docker has an autostart capability.
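For the systemd route, a minimal unit might look like this (the unit name, install path, and compose working directory are assumptions for illustration):

```ini
# /etc/systemd/system/jenkins-compose.service (hypothetical path/name)
[Unit]
Description=Jenkins via docker compose
Requires=docker.service
After=docker.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/jenkins
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Docker’s own autostart capability is its restart policy, e.g. docker run --restart unless-stopped, which revives the container whenever the Docker daemon starts.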

However, these solutions will not survive instance termination. If using an autoscaling group, you need to configure everything via UserData or cloudformation::init: attach the EBS volume and configure its automount in /etc/fstab; configure a systemd service set to start after network filesystems; and then start it.

You need to create a plugins.txt comprising the plugins you need. In terms of upgrading, my personal project jenkins-bootstrap-shared has the upgrade process well documented. I don’t use the official docker image, so beyond creating the plugins.txt for it I’m not sure of its process. I recommend reading the official image documentation in the project’s GitHub README.

My upgrade process

In my case (not official), I create a minimal, versionless plugins.txt, bootstrap the latest Jenkins and install the minimal plugins, and once plugins are installed, immutable versions are pinned in dependencies.gradle.

If the official docker image does not define an upgrade process I would maintain two versions of plugins.txt. One version with only a minimal set of required plugin IDs (they will add or remove transitive plugins in a fresh install), and then capturing a versioned plugins.txt for docker image building. I have scripts which enable capturing plugin IDs and versions so you can adopt it for this purpose.