We have deployed Jenkins in a Kubernetes cluster using the Jenkins helm chart. Along with upgrading the LTS version, we pass in a txt file containing the list of plugins and the versions we would like to upgrade the plugins to.
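For reference, the plugins.txt follows the `plugin:version` format that jenkins-plugin-cli accepts, one plugin per line (these entries are made-up examples, not our actual list):

```
git:5.0.0
workflow-aggregator:596.v8c21c963d92d
configuration-as-code:1670.v564dc8b_982d0
```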
This is how the chart copies plugins into the JENKINS_HOME directory; the script containing these commands runs in the init container:
cp /var/jenkins_config/plugins.txt /var/jenkins;
jenkins-plugin-cli --verbose --war /usr/share/jenkins/jenkins.war --plugin-file /var/jenkins/plugins.txt --latest {{ .Values.installLatestPlugins }}{{- if .Values.installLatestSpecifiedPlugins }} --latest-specified{{- end }};
echo "copy plugins to shared volume"
# Copy plugins to shared volume; `yes n` answers "n" to every `cp -i` overwrite prompt,
# so any file already present in /var/jenkins_plugins is skipped
yes n | cp -i /usr/share/jenkins/ref/plugins/* /var/jenkins_plugins;
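To illustrate what that last command does, here is a minimal local sketch (the paths are made up): `yes n | cp -i` declines every overwrite prompt, so a file that already exists in the destination is never replaced by the newly downloaded copy.

```shell
# Simulate the init container's copy step with a pre-existing destination file
mkdir -p /tmp/demo_src /tmp/demo_dst
echo "new-version" > /tmp/demo_src/plugin.jpi
echo "old-version" > /tmp/demo_dst/plugin.jpi

# Same pattern as the chart's copy command; prompts are answered "n"
yes n | cp -i /tmp/demo_src/plugin.jpi /tmp/demo_dst/ 2>/dev/null

cat /tmp/demo_dst/plugin.jpi   # still prints "old-version"
```

So if the shared plugins volume persists between runs, files copied earlier would not be overwritten by this step.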
When we upgraded an instance from version 2.332.2 to 2.387.2 and passed in the plugins.txt file listing the plugins to upgrade, the plugin versions stayed the same. The init-container logs show that the new plugin versions were downloaded, but I'm not sure why the Jenkins instance didn't load them. There are no exceptions in the jenkins-controller pod logs, and we haven't run into this situation before.
Is there a reason this would happen? It doesn't happen on all of our Jenkins instances, only some, and it's not clear what difference between them is preventing the plugin upgrades. Thanks in advance.