Jenkins hangs when pipeline completes successfully

We have some Jenkins pipeline jobs that run for 2 or 3 hours. Sometimes the job runs, succeeds, and then Jenkins hangs.
Jenkins displays the “Jenkins is going to shutdown” message - which was NOT triggered from the Manage Jenkins console.
I then have to kill, kill, kill the job for it to finish - here is what the console ends with:
Once the job has been fully killed, Jenkins goes back to normal and stops displaying the “will restart” message.

Any ideas on what’s going on?
(oh, and of course, this means the job was not successful, so I end up having to run it again to get the artifacts.)
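When this happens, it may help to confirm whether the controller really is in the “quieting down” (shutdown pending) state, and to cancel it without killing the running build. A minimal Script Console sketch (Manage Jenkins → Script Console); `Jenkins.get()`, `isQuietingDown()`, and `doCancelQuietDown()` are Jenkins core APIs, but try this on a test instance first:

```groovy
import jenkins.model.Jenkins

// Paste into Manage Jenkins -> Script Console
def j = Jenkins.get()

// true if a shutdown/restart is pending (i.e. the banner is showing)
println "Quieting down: ${j.isQuietingDown()}"

// Cancel the pending shutdown without touching running builds
j.doCancelQuietDown()
println "Quieting down after cancel: ${j.isQuietingDown()}"
```

If the banner clears but keeps coming back, something (a plugin, a script, or an external POST to the `/quietDown` endpoint) is re-triggering it, and the controller logs around that time are the place to look.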


Is it possible that something (Docker, the kernel, Kubernetes, etc.) is saying it ran out of memory and is trying to kill it?

I’ve not really heard of Jenkins attempting to shut down without user interaction. A long time ago, people were calling process.exit (or whatever) in their pipelines, which would kill Jenkins, but I don’t know whether that would trigger a graceful exit anyway.


Is there some log somewhere that might tell me why Jenkins wants to restart?
Note: Possibly related issue - when we run these long jobs, Jenkins often complains about “more SCM polling activities scheduled than handled”. It’s just polling a remote Bitbucket/Git server.
Is there any way to see a list of the outstanding polling events?
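There’s no built-in UI list, but the Script Console can show which items the SCM trigger currently has queued or in flight. A sketch using Jenkins core’s `SCMTrigger.DescriptorImpl`; `getItemsBeingPolled()` is the relevant call, though its exact return type has varied across Jenkins versions:

```groovy
import jenkins.model.Jenkins
import hudson.triggers.SCMTrigger

// Paste into Manage Jenkins -> Script Console
def desc = Jenkins.get().getDescriptorByType(SCMTrigger.DescriptorImpl)

// Items whose SCM polling is currently queued or running
desc.getItemsBeingPolled().each { item ->
    println item.fullName
}
```

Separately, each job with polling enabled has a “Polling Log” link in its sidebar showing the output of its most recent poll, which can help spot a poll that is hanging.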


Are you running your builds on the built-in node, meaning directly on the controller?

Yes, these builds run on the main Jenkins box.

It’s now considered a bad idea to run jobs on the built-in node.
Please have a look at: Clarification about what can run on agents vs controller? - #2 by halkeye
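In practice that means giving the controller zero executors (Manage Jenkins → Nodes → Built-In Node → “# of executors” = 0) and pinning pipelines to an agent label. A minimal declarative Pipeline sketch; the `linux` label and the `make` step are placeholders, not your actual setup:

```groovy
pipeline {
    // 'linux' is a placeholder label; use one your agents actually carry
    agent { label 'linux' }

    stages {
        stage('Build') {
            steps {
                // example build step; replace with your real build command
                sh 'make all'
            }
        }
    }
}
```

With the controller at zero executors, a job that has no matching agent sits in the queue instead of silently landing on the controller, which also keeps a runaway build from taking the controller down with it.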