Cannot install plugins even with jenkins-version flag specified

We are upgrading from Jenkins (containerized, running on Amazon ECS):

  • from: jenkins/jenkins:2.346.3-lts-jdk8

  • to: jenkins/jenkins:2.452.3-lts-jdk17

  • Our existing Jenkins controller has many plugins installed.

Currently we are making a simple attempt to mimic this upgrade by:

  • Running a jenkins/jenkins:2.346.3-lts-jdk8 with a jenkins_home volume, and installing some plugins
  • Stopping the container
  • Starting a jenkins/jenkins:2.452.3-lts-jdk17 and pointing it to the same volume

However, even when using --jenkins-version 2.346.3, the plugins fail to install, with many errors similar to:

package-name (version) requires a greater version of Jenkins (2.387.3) than 2.346.3

A Dockerfile.old to reproduce this:

FROM jenkins/jenkins:2.346.3-lts-jdk8

ENV JENKINS_VERSION=2.346.3
RUN set -eux \
    && jenkins-plugin-cli \
        --jenkins-version 2.346.3 \
        --verbose \
        --plugins \
            ws-cleanup:0.43 \
            htmlpublisher:1.31

Shown above are the current versions of two plugins in use by our actual Jenkins controller, so we expect them to install successfully into an image built from the same base image with the same plugin versions. This is key to our ability to test and experiment with the upgrade.

docker build -f Dockerfile.old -t old-jenkins .

How can we successfully install (any) plugins into the 2.346.3-lts-jdk8 image?

We also tried the artifact:version:url format, as suggested by the jenkins-plugin-cli docs; it fails with the same error:

FROM jenkins/jenkins:2.346.3-lts-jdk8

ENV JENKINS_VERSION=2.346.3
RUN set -eux \
    && jenkins-plugin-cli \
        --jenkins-version 2.346.3 \
        --verbose \
        --plugins \
            'ws-cleanup:0.43:https://updates.jenkins.io/download/plugins/ws-cleanup/0.43/ws-cleanup.hpi' \
            'htmlpublisher:1.31:https://updates.jenkins.io/download/plugins/htmlpublisher/1.31/htmlpublisher.hpi'

It appears that the failure occurs because jenkins-plugin-cli only applies the --jenkins-version constraint to the named plugins, not to their dependencies.

The same error also occurs with --plugin-file:

cat << EOF > jenkins-plugins.yaml
plugins:
  - artifactId: htmlpublisher
    source:
      version: 1.31
  - artifactId: ws-cleanup
    source:
      version: 0.43
EOF
FROM jenkins/jenkins:2.346.3-lts-jdk8

ENV JENKINS_VERSION=2.346.3
COPY jenkins-plugins.yaml /tmp
RUN set -eux \
    && jenkins-plugin-cli \
        --jenkins-version 2.346.3 \
        --verbose \
        --plugin-file /tmp/jenkins-plugins.yaml

Beyond the issue reported here, more documentation on the recommended way to upgrade plugins during a Jenkins version upgrade would be appreciated.

The documentation at Upgrading to Java 11 states “it is important to upgrade all plugins” [as a final step after the Jenkins/Java upgrade].

Is the idea here that the new Jenkins container, re-using the existing volume for JENKINS_HOME, should start with ‘broken’ plugins and then upgrade them in place to versions compatible with the new Jenkins version? Or should the plugins directory be wiped first, followed by a clean install? More specificity in the documentation around this would be welcome.

Your technique is a good one, but it needs a further refinement.

Jenkins 2.346.3 is two years old. The Jenkins update center provides plugin version support information for Jenkins releases from the last 12 months. The update center documentation says:

Jenkins weekly and LTS releases up to a year old are supported; anything older will receive update metadata for the oldest supported releases.

Based on that, your container image that represents the current state of your Jenkins controller needs a definition of all the plugins to install and their versions. You can generate that list from “Manage Jenkins” → “Script Console” with the script that is available in “How to report an issue”. That script is:

Jenkins.instance.pluginManager.plugins
    .collect()
    .sort { it.getShortName() }
    .each {
        plugin -> println("${plugin.getShortName()}:${plugin.getVersion()}")
    }
return

The output of that script can be used in a plugins.txt file or you can list the plugin names and versions on the command line as you’ve done in your example. Refer to “Preinstalling plugins” in the container documentation for more details.
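
For illustration only, the first lines of such a plugins.txt could look like this (the names and versions below are just examples; the real file should contain the full Script Console output, which pins dependencies as well, since every installed plugin is listed):

htmlpublisher:1.31
ws-cleanup:0.43

jenkins-plugin-cli --verbose --plugin-file plugins.txt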

Thanks @MarkEWaite! Specifying the full plugins.txt with pinned versions of dependencies indeed does the trick. I’m now able to build and run with the existing (old) plugins installed.

So, now we are faced with upgrading to jenkins/jenkins:2.452.3-lts-jdk17.

What is not entirely clear to me from the Jenkins docs is the recommended procedure for upgrading plugins as part of a large Jenkins/JDK version upgrade such as this one.

Should we remove the existing plugins/ directory from the currently-used jenkins_home volume prior to upgrade?

To reproduce this situation, create Dockerfile.old:

# Dockerfile.old
FROM jenkins/jenkins:2.346.3-lts-jdk8

# plugins.old.txt contains dump of full plugin:version
# export from Jenkins Script Console
COPY plugins.old.txt /tmp
RUN set -eux \
    && jenkins-plugin-cli \
        --verbose \
        --plugin-file /tmp/plugins.old.txt \
    && mkdir -pv ${JENKINS_HOME}/plugins \
    && cp -r -p \
        /usr/share/jenkins/ref/plugins/. \
        ${JENKINS_HOME}/plugins/.

Build and run as:

docker build -f Dockerfile.old -t old-jenkins .
docker volume create jenkins_home
docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  --name jenkins \
  old-jenkins

This will provide a Jenkins 2.346.3 controller running on JDK 8 with the working (old) plugins installed.
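
As a quick sanity check (assuming curl is available on the host), the running core version and the installed plugins can be inspected with:

curl -sI http://localhost:8080/login | grep -i x-jenkins
docker exec jenkins ls /var/jenkins_home/plugins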

Now we are faced with running a new container, based on jenkins/jenkins:2.452.3-lts-jdk17, that mounts the existing jenkins_home volume.

For instance,

docker container stop jenkins
docker container rm jenkins

Now, cp plugins.old.txt plugins.new.txt, and remove the pinned :version specifiers from plugins.new.txt to allow the versions to float to the latest.
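
One way to strip the pinned versions, assuming one plugin:version entry per line:

sed 's/:.*$//' plugins.old.txt > plugins.new.txt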

Then create Dockerfile.new:

# Dockerfile.new
FROM jenkins/jenkins:2.452.3-lts-jdk17

COPY plugins.new.txt /tmp
RUN set -eux \
    && jenkins-plugin-cli \
        --verbose \
        --plugin-file /tmp/plugins.new.txt \
    && mkdir -pv ${JENKINS_HOME}/plugins \
    && cp -r -p \
        /usr/share/jenkins/ref/plugins/. \
        ${JENKINS_HOME}/plugins/.

Build and run as:

docker build -f Dockerfile.new -t new-jenkins .
docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  --name jenkins \
  new-jenkins

Then, docker logs jenkins will show many errors such as the following:

Failed Loading plugin Mina SSHD API :: Core v2.12.1-101.v85b_e08b_780dd (mina-sshd-api-core)
Update required: SSH Credentials Plugin (ssh-credentials 305.v8f4381501156) to be updated to 326.v7fcb_a_ef6194b_ or higher

These errors make sense: the plugins/ directory already present on the jenkins_home Docker volume eclipses the plugins/ content copied to ${JENKINS_HOME}/plugins/ during the image build. In other words, the plugins we attempt to pre-install into the new image are overridden by the existing old plugins, and the old plugins raise errors because they are incompatible with the new Jenkins version.
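
Roughly speaking, the startup behaviour appears to be copy-if-missing; a simplified sketch (not the actual jenkins.sh, which does more) of what seems to happen:

# reference plugins are only copied when missing from the volume,
# so the old plugins already on the volume win
for ref in /usr/share/jenkins/ref/plugins/*; do
    target="${JENKINS_HOME}/plugins/$(basename "$ref")"
    [ -e "$target" ] || cp -p "$ref" "$target"
done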

I believe that you can use the concept of an “override” to force the updated plugin versions to replace the existing plugin versions on startup. The container image documentation says:

When the Jenkins container starts, it will check JENKINS_HOME has this reference content, and copy them there if required. It will not override such files, so if you upgraded some plugins from UI they won’t be reverted on next start.

In case you do want to override, append ‘.override’ to the name of the reference file. E.g. a file named /usr/share/jenkins/ref/config.xml.override will overwrite an existing config.xml file in JENKINS_HOME.

There is also a pending pull request to the container image documentation that might offer some additional insight.

I’m not quite sure I’m following. Would this suggest adding a .override extension to each file, recursively, under /usr/share/jenkins/ref/plugins/?

Here is what we have currently, which, without extensive testing, seems to be working:

Step 1 - mimic the existing Jenkins with volume and outdated plugins:

docker volume create jenkins_home
docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home --name jenkins old-jenkins

Perform the initial admin login to Jenkins, create a user, and verify that the outdated plugins are installed.
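
The initial admin password can be read from the running container:

docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword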

Step 2 - remove existing plugins directory from volume.

docker container stop jenkins
docker run -it --rm \
  -v jenkins_home:/var/jenkins_home \
  --name shell \
  alpine
# and rm -rf /var/jenkins_home/plugins

Step 3 - run new Jenkins

docker container rm jenkins
docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home --name jenkins new-jenkins

Step 4 - verify that new plugins are installed at localhost:8080.
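
Beyond clicking through the UI, the installed plugin versions can also be listed via the plugin manager API (admin:API_TOKEN below is a placeholder; this assumes an API token and jq on the host):

curl -s -u admin:API_TOKEN 'http://localhost:8080/pluginManager/api/json?depth=1' \
  | jq -r '.plugins[] | "\(.shortName):\(.version)"' | sort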

Yes, that’s what it is suggesting. That makes it clear that you intend to overwrite existing files during container startup.
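
Untested, but something along these lines in Dockerfile.new might do it, assuming jenkins-plugin-cli leaves the plugins as *.jpi files under /usr/share/jenkins/ref/plugins/:

# after the jenkins-plugin-cli step; the build-time cp into ${JENKINS_HOME}/plugins
# is then unnecessary, since the startup copy handles the override files
RUN cd /usr/share/jenkins/ref/plugins \
    && for f in *.jpi; do mv "$f" "$f.override"; done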

Okay, thanks.

I see that the Helm chart offers an overwritePlugins flag. It would be nice to mimic this functionality for non-K8s deployments of containerized Jenkins, for instance via an environment variable that the jenkins.sh ENTRYPOINT picks up, telling it to overwrite the existing plugins on the volume with those installed into /usr/share/jenkins/ref/plugins/ during the image build.
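
A rough, hypothetical sketch of that idea (OVERWRITE_PLUGINS is an invented variable, not something the official image understands; the entrypoint paths are taken from the stock image):

#!/bin/sh
# entrypoint-wrapper.sh
set -eu
if [ "${OVERWRITE_PLUGINS:-false}" = "true" ]; then
    # rename bundled plugins so the stock startup copy treats them as
    # .override files and replaces the copies already on the volume
    for f in /usr/share/jenkins/ref/plugins/*.jpi; do
        if [ -e "$f" ]; then
            mv "$f" "$f.override"
        fi
    done
fi
# hand off to the image's normal entrypoint script
exec /usr/local/bin/jenkins.sh "$@"

with, in Dockerfile.new (the wrapper must be executable in the build context):

COPY entrypoint-wrapper.sh /usr/local/bin/entrypoint-wrapper.sh
ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint-wrapper.sh"]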

For now, we are going to proceed with the intermediate step of rm -rf plugins/* from the volume, as is done by the K8s deployment, prior to startup of the new container.
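
For reference, that intermediate step can also be done without an interactive shell:

docker container stop jenkins
docker run --rm -v jenkins_home:/var/jenkins_home alpine rm -rf /var/jenkins_home/plugins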