Hi,
Environment:
Plugin name: kubernetes-plugin
Plugin versions:
- 3646.va_b_469a_7666b_7
- 3651.v908e7db_10d06
Jenkins version: 2.332.2
We observed the following behaviour:
- create a new job in Jenkins and specify an invalid pod template, e.g. one that requests more memory than your resource quota currently allows (see the example pipeline after this list)
- start a new build for this job
- Jenkins will create a new /var/jenkins_home/nodes/$pod/config.xml in its filesystem
- the Kubernetes API will reject the pod as expected
- Jenkins will retry creating the pod as long as an executor is available (with 10 executors, it retries ten times per second)
- now trigger this job multiple times so that the build queue increases (e.g. 50 times)
- Jenkins will again create the config.xml files, but it no longer cleans up the failed ../nodes/$pod/config.xml files; after a while, the number of config.xml files from failed builds grew to an unreasonable level (in our case: 138,000)
- once you restart Jenkins, it will try to load all nodes from disk, which will not finish in a reasonable time (loading 138,000 obsolete configs takes a while)
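
For reference, a minimal scripted-pipeline reproducer for the invalid pod template in the first step might look like the following; the 64Gi memory request is an arbitrary value assumed to exceed the namespace's resource quota:

```groovy
// Minimal reproducer (scripted pipeline): the pod template below is
// assumed to request more memory than the namespace's ResourceQuota
// allows, so the Kubernetes API rejects every pod and Jenkins retries.
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "9999"
    resources:
      requests:
        memory: "64Gi"  # assumed to exceed the quota
''') {
    node(POD_LABEL) {
        // Never reached: the pod is rejected by the API server.
        sh 'echo hello'
    }
}
```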
We are severely impacted by this issue: our Jenkins master fails to restart with an OOM error because it tries to load all of the (undeleted) node config.xml files from disk. Every time this happens, we have to clean up the node config.xml files manually to bring Jenkins back up (see the cleanup sketch below).
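
For anyone hitting the same problem, a minimal sketch of this kind of manual cleanup, written for the Jenkins script console, might look like the following. The KubernetesSlave class-name check and the offline filter are assumptions about what safely identifies the stale nodes; review the list before deleting anything on a production controller.

```groovy
import jenkins.model.Jenkins

// Remove Kubernetes agent nodes whose computer is offline, i.e. pods
// that never launched. Removing a node should also delete its
// nodes/$pod/config.xml directory under JENKINS_HOME.
def jenkins = Jenkins.get()
def stale = jenkins.nodes.findAll { node ->
    def computer = node.toComputer()
    // Assumption: stale entries are Kubernetes agents with no live computer.
    node.getClass().name.contains('KubernetesSlave') &&
        computer != null && computer.isOffline()
}
stale.each { node ->
    println "Removing stale node: ${node.nodeName}"
    jenkins.removeNode(node)
}
println "Removed ${stale.size()} stale nodes"
```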
Does anyone know of a workaround until there is a permanent fix for this issue?