Pod Issue on RKE2 cluster

Hello Folks,
We are working on migrating Jenkins from one datacenter to another, and as part of that we are facing two issues on the new server:

  • Pods don’t get deleted even after builds are completed
  • jnlp-agent pods are auto-created and remain stuck in the Terminating state

Versions we are using on the new server:

  • Environment: Rocky Linux 9.5 (Blue Onyx)
  • Jenkins: 2.462.2
  • Rancher version: 2.10.4
  • kubernetes-client-api:6.10.0-240.v57880ce8b_0b_2
  • kubernetes-credentials:174.va_36e093562d9
  • kubernetes:4253.v7700d91739e5

Versions we are using on the old server:

  • Environment: Ubuntu 20.04.6 LTS
  • Jenkins: 2.462.2
  • Rancher version: 2.6.4
  • kubernetes-client-api:6.10.0-240.v57880ce8b_0b_2
  • kubernetes-credentials:174.va_36e093562d9
  • kubernetes:4253.v7700d91739e5

Scenario_1: jnlp-agent pods are auto-created and remain in the Terminating state

  • We are using the inbound image: 3283.v92c105e0f819-1-jdk17
  • Ideally, if we are using an inbound agent, it should not generate jnlp-agent pods unless they are defined in the Jenkins cloud configuration / config.xml file
  • The Jenkins cloud configuration is as below
  • Just to note here, everything is the same on the old & new servers (the cloud configuration is exactly the same), but we don’t have this issue on the old cluster
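For context, these are the kinds of checks we have been running to see what is creating the jnlp-agent pods (the `jenkins` namespace below is a placeholder; adjust it to the actual agent namespace):

```shell
# List agent pods together with their labels; pods created by the
# Jenkins Kubernetes plugin carry labels identifying the pod template
kubectl get pods -n jenkins --show-labels

# Recent events often show what requested or scheduled the pod
kubectl get events -n jenkins --sort-by=.metadata.creationTimestamp | tail -20
```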

Scenario_2: Pods don’t get deleted even after builds are completed

  • In this scenario, old pods are not deleted even after builds complete, which consumes a lot of unwanted resources, and new pods are then unable to spin up due to insufficient CPU/memory.
  • Note: this works fine on the old Rancher cluster. Attached is a screenshot where the pod was not deleted even though the build completed (it stays in a Terminating / Running / Not Ready state).
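For reference, this is roughly how we have been inspecting the stuck pods (pod name and namespace are placeholders):

```shell
# Look at the pod's events and status for clues about why deletion hangs
kubectl describe pod jnlp-agent-xxxxx -n jenkins

# Pods stuck in Terminating often have finalizers that never clear
kubectl get pod jnlp-agent-xxxxx -n jenkins -o jsonpath='{.metadata.finalizers}'

# As a last resort (skips graceful shutdown), force-delete the stuck pod
kubectl delete pod jnlp-agent-xxxxx -n jenkins --grace-period=0 --force
```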

Could you please assist with the same?
Thanks,
Snehal

Hello Folks, we’d really appreciate your help with this issue. Can anyone please assist?

One thing I immediately see is:

You are using a very outdated agent image (more than 3 years old). You need to switch to jenkins/inbound-agent instead of anything with “slave” in it.
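To double-check which image the agent pods are actually running, something like this helps (the `jenkins` namespace is a placeholder):

```shell
# Print each agent pod's name and the image(s) its containers run,
# so you can confirm whether the old agent image is still in use
kubectl get pods -n jenkins \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```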

Then, since Rancher version changes also change the supported k3s versions, you may need to update the Kubernetes plugins as well. You really should also update Jenkins to a recent version.

Third thing: I hope you are only using Rancher-managed downstream clusters, and not using the Rancher cluster directly for any non-Rancher workloads.