Plugin update behaviour with custom container images and Kubernetes persistent volumes

Good Morning All,

I’ve spent a couple of days experimenting with plugin update behaviours for a customer, using the community Helm chart (jenkinsci/helm-charts) to deploy Jenkins into Kubernetes (EKS), with AWS EFS providing a persistent volume for $JENKINS_HOME.

I’m a bit confused about the behaviours I’ve observed. My understanding was that when I mount the EFS persistent volume at /var/jenkins_home, the contents of the container image at that path would be “replaced” by the EFS volume. I’d guessed that there was some form of init step (the init container?) copying content into EFS on first start. As per the documentation, I’ve set initialisation to occur only once, since I’m bringing in plugins from the container image.

When I did some experimentation, however, I saw inconsistent results…

In terms of the helm chart values I think the relevant ones are:

controller:
  componentName: "jenkins-controller"
  image: <accountId>.dkr.ecr.eu-west-1.amazonaws.com/<registryName>
  tag: <imageTag>
  installPlugins: false

  # List of plugins to be installed during Jenkins controller start
  #  installPlugins:
  #  - kubernetes:3743.v1fa_4c724c3b_7
  #  - workflow-aggregator:590.v6a_d052e5a_a_b_5
  #  - git:4.14.3
  #  - configuration-as-code:1569.vb_72405b_80249

  installLatestPlugins: true
  installLatestSpecifiedPlugins: false
 
  # List of plugins to install in addition to those listed in controller.installPlugins
  #additionalPlugins:
  #  - github:1.36.0
  #  - command-launcher:90.v669d7ccb_7c31
  #  - javax-mail-api:1.6.2-8
  #  - jdk-tool:63.v62d2fd4b_4793
  #  - sshd:3.270.vb_a_e71e64c287
 
  initializeOnce: true
  # overwritePlugins: true
  overwritePluginsFromImage: false

  JCasC:
    defaultConfig: false
persistence:
  enabled: true
  existingClaim: efs-claim-ap
  storageClassName: efs-sc
  storageClass:
  annotations: {}
  labels: {}
  accessMode: "ReadWriteOnce"
  size: "8Gi"
  volumes:
  mounts:

Some of the behaviour observed during testing is unexplained (the checks shown after this list are how I compared the image and volume contents):

  1. A file written to /var/jenkins_home/myfile in the container image was not visible in the Jenkins pod once the PV was mounted at /var/jenkins_home. EXPECTED
  2. A new plugin added to the container image using jenkins-plugin-cli was copied to the EFS volume when the controller pod was deleted and recreated. This is UNEXPECTED and implies there is another process copying file(s) from the container image to the EFS volume.
  3. Updating an existing plugin in the container image to a new version was not reflected in the EFS volume or the Jenkins console. This behaviour is EXPECTED, as the mount overrides the contents of /var/jenkins_home, which includes ./plugins.
  4. Removing the plugins folder from the EFS volume before deleting/recreating the Jenkins controller pod causes the plugin folder from the container image to be copied into the EFS volume, bringing across all installed/updated plugins. This is UNEXPECTED.
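
For reference, the checks were along these lines (the pod name and namespace are placeholders from my test environment, so adjust accordingly):

# Plugins baked into the image (jenkins-plugin-cli installs into the ref directory)
kubectl exec -n jenkins jenkins-0 -- ls /usr/share/jenkins/ref/plugins

# Plugins actually present on the EFS volume mounted at /var/jenkins_home
kubectl exec -n jenkins jenkins-0 -- ls /var/jenkins_home/plugins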

I have a working upgrade process now: build a new custom image with my desired plugins, then move the plugins folder elsewhere on the EFS volume before recreating the controller pod (sketched below).
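
Concretely, the process looks roughly like this (the image URI placeholders match the values above; the pod name is from my environment):

# 1. Build and push a new controller image containing the desired plugin set
docker build -t <accountId>.dkr.ecr.eu-west-1.amazonaws.com/<registryName>:<imageTag> .
docker push <accountId>.dkr.ecr.eu-west-1.amazonaws.com/<registryName>:<imageTag>

# 2. Park the current plugins directory somewhere else on the EFS volume
kubectl exec -n jenkins jenkins-0 -- mv /var/jenkins_home/plugins /var/jenkins_home/plugins.old

# 3. Recreate the controller pod (after bumping controller.tag in the values);
#    on startup the image's plugins are copied onto the now-empty volume path
kubectl delete pod -n jenkins jenkins-0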

But I don’t understand why it’s working :frowning: and that’s bugging me.

Re point 2: yes, on startup /usr/share/jenkins/ref/plugins is copied into the Jenkins home without overwriting anything that already exists there.

Re point 3: see previous; because the copy never overwrites an existing file, an updated plugin in the image is ignored while the old version is still sitting on the volume.
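
Simplified, the startup logic behaves something like this (a sketch only, not the actual script shipped in the Jenkins image, which also handles details like *.override markers):

# Copy reference files into JENKINS_HOME, but only where the target is absent
find /usr/share/jenkins/ref -type f | while read -r f; do
  rel="${f#/usr/share/jenkins/ref/}"
  target="$JENKINS_HOME/$rel"
  if [ ! -e "$target" ]; then
    mkdir -p "$(dirname "$target")"
    cp "$f" "$target"   # a newly added plugin gets copied across
  fi                    # an existing file (e.g. an older plugin) is left alone
done

That covers your points 2, 3 and 4 in one go: anything missing on the volume is copied in, anything already present is left untouched.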

Re point 4: this is exactly what the overwritePlugins value in the Helm chart does; it removes $JENKINS_HOME/plugins/ on init so that startup can copy the plugins over from the image.

Hope this is what you’re asking; you didn’t really ask a question.


That’s fantastic, Gavin; greatly appreciate the explanation of what’s happening under the hood. I’ll experiment with toggling the overwritePlugins value over the weekend to remove the manual steps from the update process.
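
i.e. something along these lines (assuming the chart repo is registered as jenkins and the release and namespace are both jenkins; adjust for your install):

# Let the chart wipe and re-copy the plugins directory instead of moving it by hand
helm upgrade jenkins jenkins/jenkins -n jenkins \
  -f values.yaml \
  --set controller.tag=<newImageTag> \
  --set controller.overwritePlugins=true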

Thanks!