We have some Jenkins Pipeline jobs that run for 2 or 3 hours. Sometimes a job runs, succeeds, and then Jenkins hangs.
Jenkins displays the “Jenkins is going to shutdown” message, which was NOT triggered from the Manage Jenkins console.
I then have to kill, kill, kill the job for it to finish; here is what the console ends with:
Once the job has been fully killed, Jenkins goes back to normal and stops displaying the “will restart” message.
Any ideas on what’s going on?
(oh, and of course, this means the job was not successful, so I end up having to run it again to get the artifacts.)
Is it possible that something (Docker, the kernel, k8s, etc.) is saying it ran out of memory and is trying to kill it?
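One rough way to check that theory, assuming a Linux controller host and that the Jenkins process is allowed to run dmesg, is to grep the kernel log for OOM-killer entries from the Script Console (Manage Jenkins > Script Console). This is only a sketch; on agents you would run the equivalent shell command in the job itself:

```groovy
// Rough check for recent kernel OOM-killer activity on the controller host.
// Assumes a Linux host where the Jenkins user may run dmesg.
def out = ["sh", "-c",
           "dmesg -T | grep -iE 'out of memory|killed process' | tail -n 20"]
          .execute().text
println(out ?: "No OOM-killer entries found in dmesg output")
```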
I’ve not really heard of Jenkins attempting to shut down without user interaction. A long time ago people were calling System.exit (or whatever) in a Pipeline, which would kill Jenkins, but I don’t know whether that would trigger a graceful exit anyway.
Is there some log somewhere that might tell me why Jenkins wants to restart?
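From what I can tell, that banner is Jenkins’s “quiet down” (prepare-for-shutdown) state, the same one that JENKINS_URL/quietDown puts it in, so the controller log around that time (jenkins.log, or journalctl -u jenkins on systemd hosts) might show what requested it. A rough Script Console sketch to confirm and clear the state, assuming the usual Jenkins core methods:

```groovy
// Run from Manage Jenkins > Script Console.
// The "going to shut down" banner corresponds to "quiet down" mode.
import jenkins.model.Jenkins

def jenkins = Jenkins.get()
println "Quieting down: ${jenkins.isQuietingDown()}"

// Uncomment to clear the banner without restarting
// (equivalent to visiting JENKINS_URL/cancelQuietDown):
// jenkins.doCancelQuietDown()
```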
Note: a possibly related issue. When we run these long jobs, Jenkins often complains that there are “more SCM polling activities scheduled than handled”. It’s just polling a remote Bitbucket/Git server.
Is there any way to see a list of the outstanding polling events?
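Two rough approximations I know of: each polled job has its own polling log at JOB_URL/scmPollLog, and from the Script Console you can ask the SCM trigger descriptor what it is currently polling. A sketch, assuming the getItemsBeingPolled() and getPollingThreadCount() helpers on SCMTrigger’s descriptor (these may differ between Jenkins core versions):

```groovy
// Run from Manage Jenkins > Script Console.
import jenkins.model.Jenkins
import hudson.triggers.SCMTrigger

def descriptor = Jenkins.get().getDescriptorByType(SCMTrigger.DescriptorImpl)

println "Polling thread count limit: ${descriptor.getPollingThreadCount()}"
println "Items currently being polled:"
descriptor.getItemsBeingPolled().each { println "  ${it.fullName}" }

// Live threads that look like polling work
// (thread naming is an assumption, not guaranteed by Jenkins).
println "Threads mentioning 'SCM polling':"
Thread.getAllStackTraces().keySet()
      .findAll { it.name.toLowerCase().contains('scm polling') }
      .each { println "  ${it.name} [${it.state}]" }
```

If the thread list stays long while the “scheduled than handled” warning is up, that would at least tell you the polling queue is backed up rather than the warning being stale.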