Jenkins hangs when pipeline completes successfully

We have some Jenkins pipeline jobs that run for 2 or 3 hours. Sometimes the job runs, succeeds, and then Jenkins hangs.
Jenkins will display the “Jenkins is going to shutdown” message - which was NOT triggered from the Manage Jenkins console.
I then have to kill, kill, kill the job for it to finish - here is what the console ends with:
[screenshot of the console output: 2022-12-06_10-23-48]
Once the job has been fully killed, Jenkins goes back to normal, and stops displaying the “will restart” message.

Any ideas on what’s going on?
(oh, and of course, this means the job was not successful, so I end up having to run it again to get the artifacts.)

Is it possible that something (Docker, the kernel, k8s, etc.) is saying it ran out of memory and is trying to kill it?

I’ve not really heard of Jenkins attempting to shut down without user interaction. A long time ago people were calling process.exit (or something similar) in a pipeline, which would kill Jenkins, but I don’t know whether that would trigger a graceful exit anyway.

Is there some log somewhere that might tell me why Jenkins wants to restart?
Note: Possibly related issue - when we run these long jobs, Jenkins often complains about “more SCM polling activities scheduled than handled”. It’s just polling a remote Bitbucket/Git server.
Is there any way to see a list of the outstanding polling events?
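
(For reference, one way to peek at the outstanding polling work from the script console might be a thread dump filtered to the polling threads. This is only a sketch; the “SCM polling” name filter is an assumption about how those threads are named on a given instance.)

```groovy
// Script console sketch: list threads whose name mentions SCM polling,
// to see whether polling runs are piling up or stuck waiting on the remote.
// The 'SCM polling' substring is an assumed naming convention; adjust it if
// your thread dump names these threads differently.
Thread.getAllStackTraces().each { thread, stack ->
    if (thread.name.contains('SCM polling')) {
        println "${thread.name} (state: ${thread.state})"
        stack.each { frame -> println "    at ${frame}" }
    }
}
return 'done'
```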

Are you running your builds on the built-in node, meaning directly on the controller?

Yes, these builds run on the main Jenkins box.

It’s now considered a bad idea to run jobs on the built-in node.
Please have a look at: Clarification about what can run on agents vs controller? - #2 by halkeye

Ok, so we were finally able to move all builds off to Agents.
This issue just happened again while 3 agents were running different builds.
ALL 3 AGENTS WERE STOPPED BY THIS!
It’s even more frustrating in that all 3 agents reported their builds as successful, but because Jenkins forced them to stop after reporting success, the builds don’t get marked as successful, so other dependent jobs won’t use them.

So, if Jenkins is seeing some error that triggers this, it should be logging it SOMEWHERE. Or it’s just a major Jenkins bug.
Note that I do keep our Jenkins up to date.

So, nobody has a clue as to why Jenkins periodically hangs - taking all agents with it?

I suspect that others are not seeing the issue that you’re seeing. That may indicate that there is something specific about your environment that is different from the environments that are used by others. “How to report an issue” offers some ideas of the type of information that can increase the chances that others will try to help you diagnose the problem that you are seeing.

I’ve not seen a similar report in the Jenkins issue tracker. You might search the issue tracker to see if you can find issues that look similar to yours. They might guide your investigation.

You could consider enabling a logger at the FINE level for the jenkins.management.ShutdownLink class in case a user is requesting that shutdown through a click of the web link.

More info on using Jenkins logging is in the “Viewing logs” page at
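
A minimal script console sketch for raising that logger level at runtime is below; a custom log recorder under Manage Jenkins » System Log is the more reliable route, since java.util.logging can drop loggers that nothing holds a strong reference to.

```groovy
// Script console sketch: raise the ShutdownLink logger to FINE at runtime.
// A log recorder configured in the UI (Manage Jenkins » System Log) is the
// usual, persistent way to do this; java.util.logging may garbage-collect a
// logger that is only configured programmatically like this.
import java.util.logging.Level
import java.util.logging.Logger

def shutdownLinkLogger = Logger.getLogger('jenkins.management.ShutdownLink')
shutdownLinkLogger.level = Level.FINE
println "ShutdownLink logger level: ${shutdownLinkLogger.level}"
```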

I echo the comment from @halkeye where he said:

Likewise, I’ve not heard of Jenkins attempting to shutdown when it was not requested by a user or by a system groovy script run through the script console or by an operation launched through the REST API or through the command line interface.

Well, it’s still happening. I’ll probably end up rebuilding the Jenkins server from scratch.
One thing that does seem like a bug though, is this:

```
14:22:07  BUILD SUCCESSFUL - at 4/28/23, 2:22 PM
14:22:07  Total time: 116 minutes 9 seconds
14:22:07  Pausing (Preparing for shutdown)
```

This is on an agent. So the agent pauses, AFTER the job completes, waiting for Jenkins to shut down. Meanwhile, the Jenkins master is sitting there in the “going to shutdown” state, waiting for the agent to finish.
Deadlock.

So far, the only way I’ve found out of this situation is to stop Jenkins, stop the agent(s), and then restart all of them.

The" preparing for shutdown" can be triggered (manually from the manage page (…/manage/prepareShutdown)), there it can also get canceled again) in anticipation of a real shutdown. The shutdown has then to be initiated in an other way. I would make sure no auto-updates (via ansible, puppet, chef etc) or k8s-magics are involved here.

Another idea: could someone have put something in the build in an attempt to get an exclusive run?

No manual shutdown has been triggered, we don’t have any auto-updates, and it’s just a normal build that runs quite often. Just every now and then, it decides to go into shutdown/deadlock mode.
Oh, and if you cancel the shutdown, it pauses a few seconds, and goes back into shutdown mode.

Monitor your Jenkins logs for calls to the URLs mentioned here:

We are seeing something similar. We have been able to narrow down a little where the hang occurs (even though it is intermittent and somewhat rare). It happens inside a Jenkins “bat” command, after everything in the command has executed but before the bat closes (or maybe so soon after it closes that the “echo” on the next line never happens).
FWIW, it’s in a scripted pipeline, not declarative.
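
One hedged mitigation while digging into it: wrapping the suspect step in a timeout won’t explain the hang, but it at least turns a silent stall into a visible abort in the console log. The node label, command, and 15-minute budget below are placeholders.

```groovy
// Scripted pipeline sketch: bound the suspect bat step so an intermittent
// hang after the command finishes aborts the build instead of blocking
// forever. Label, command, and duration are placeholders.
node('windows') {
    timeout(time: 15, unit: 'MINUTES') {
        bat 'build.cmd'
    }
    echo 'bat step returned'   // never printed when the hang occurs
}
```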

In my case, all our jobs are Pipelines that invoke Ant scripts (which often invoke shell scripts).
The Ant script completes, and that is when the hang occurs (sometimes).