I am running Jenkins in Kubernetes, specifically using the jenkins/jenkins:[version] Docker image in the pod spec to run the Jenkins instance. Is there any documentation on, or steps for, upgrading the Jenkins core version when running it in Kubernetes?
The automatic upgrade in the UI does not work, because when the Jenkins pod restarts, the container starts from the image again, and the image ships the WAR file for the version in its tag. So Jenkins never stays upgraded. I have to manually edit the K8s deployment image to jenkins/jenkins:[version2] to upgrade.
Is this the right way to upgrade? What happens when there are conflicts between the two versions (changes to the directory structure, for example), and how can I resolve them?
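For reference, this is roughly the manual step I perform today (a sketch; the deployment name and container name jenkins are assumptions, and the version tag is just an example):

```sh
# Bump the controller image on the existing deployment; Kubernetes then
# terminates the old pod and starts a new one from the new image.
kubectl set image deployment/jenkins jenkins=jenkins/jenkins:2.440.2-lts

# Watch the rollout replace the old pod with the new one.
kubectl rollout status deployment/jenkins
```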
Hello @sslk, nice to see you back!
I have never experimented with Jenkins in Kubernetes myself (it's on my loooooong to-do list), but I've been told by an expert that the way you upgrade is the right one.
I don't think that will help you with your very specific questions, but have you read helm-charts/README.md at c291155f4ceed58302d6952f2c02b7f560fc71dd · jenkinsci/helm-charts · GitHub?
Hi @poddingue, thanks for the reply. The link you sent describes how to upgrade the Helm chart, not the Jenkins core version, which is what I'm trying to do.
Does your expert know how to seamlessly resolve conflicts when upgrading the Jenkins core version?
You should absolutely update the Docker image tag that Kubernetes is using, not try to edit an existing Docker image.
In Helm, you can change controller.tag to point to a new release.
(I don't know why the default is commented out.)
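A minimal sketch of what that looks like, assuming the chart repo was added as jenkins and the release is named jenkins (the version tag is just an example):

```sh
# Point the chart at a newer controller release; Helm rolls the pod so it
# starts from the new image.
helm upgrade jenkins jenkins/jenkins --reuse-values --set controller.tag=2.440.2-lts
```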
Hi @halkeye, thanks for the reply. When we update the Docker image tag, the current pod (pod A) terminates and a new pod (pod B) gets spun up with the new image. I noticed that pod A does not do any cleanup before terminating.
When I try to upgrade the Jenkins instance with the upgrade button in the UI, the cleanUp() method is called before the pod restarts (I saw it in the logs). (Note that this upgrade does not work because the Docker image tag still has the old core version; I was just testing what the upgrade button does.)
How can I safely clean up Jenkins running on pod A before it terminates?
I don't know, sorry. I believe K8s now has termination callback scripts (container lifecycle hooks); you might be able to do something with that, but I don't know of anything Jenkins-related that hooks into it.
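A minimal sketch of such a hook, assuming curl is available in the image and that an admin API token is injected as the hypothetical JENKINS_ADMIN_TOKEN environment variable. Jenkins core exposes a /safeExit endpoint that enters quiet mode, waits for running builds, and then shuts down; authenticating with an API token avoids the CSRF crumb requirement:

```yaml
spec:
  # Give Jenkins time to drain running builds before Kubernetes sends SIGKILL.
  terminationGracePeriodSeconds: 300
  containers:
    - name: jenkins
      image: jenkins/jenkins:2.440.2-lts
      lifecycle:
        preStop:
          exec:
            command:
              - sh
              - -c
              # POST /safeExit: quiet mode, wait for builds, then shut down.
              - curl -fsS -X POST -u "admin:$JENKINS_ADMIN_TOKEN" http://localhost:8080/safeExit || true
```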
It’s been a while since this comment… Hi there!
Have you figured it out yet? On one hand, K8s pods are stateless (unless you run them in a StatefulSet), and the Jenkins pod is treated that way; that's why it simply gets terminated. On the other hand, I would expect the pod to run the cleanUp() method as a finalizer of sorts, and I would also expect that to execute during the tear-down of the pod in the event of an upgrade. Is that not the case?
I’m in the same position right now, thinking about how I can cleanly upgrade my Jenkins pod.
Cheers!
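One way to check empirically whether cleanUp() runs when the pod is torn down (a sketch; the deployment name and pod label are assumptions):

```sh
# Tail the controller log in one terminal...
kubectl logs -f deployment/jenkins

# ...then trigger a graceful termination in another. Kubernetes sends SIGTERM,
# which the official image forwards to the JVM, so the Jenkins shutdown
# messages (e.g. "Jenkins stopped", logged at the end of cleanUp()) should
# appear before the grace period expires.
kubectl delete pod -l app.kubernetes.io/name=jenkins --grace-period=120
```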