I have a small dev server with 24 GB of RAM (Debian 11, no GUI) that is running multiple apps and Jenkins (which is RARELY used, to build new apps from GitHub, etc.).
The problem is that after a server restart, when all apps are running, usage is about 4-5 GB of RAM, but if I just run the Jenkins webapp (just log in to Jenkins), it consumes about 12-16 GB of additional RAM (just logged in, no job was running).
I've manually set JAVA_ARGS="-Xmx4096m" in /etc/default/jenkins, but this didn't change anything. Also, is it normal that dozens of Jenkins processes are running in the background (currently about 47 processes) when only 1 worker is set and nothing is happening at the moment?
If you installed Jenkins with the apt package and it is a currently maintained version of Jenkins (most recent LTS or most recent weekly), then changes to /etc/default/jenkins are ignored. The file /etc/default/jenkins was used when the Jenkins service was managed by the System V init system.
Jenkins releases since March 2022 have used systemd to manage the service instead of init. If you want to adjust the configuration of the Jenkins service on a Debian system, you’ll need to use systemctl edit jenkins.
More information on editing the Jenkins service configuration is available from the Jenkins documentation on managing systemd services.
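As a quick sketch of that workflow (assuming the default unit name jenkins installed by the apt package; the override file path is the standard systemd drop-in location):

```shell
# Open an editor on a drop-in override for the jenkins unit; systemd
# stores it as /etc/systemd/system/jenkins.service.d/override.conf
sudo systemctl edit jenkins

# After saving the override, restart the service so the new JVM
# arguments take effect
sudo systemctl restart jenkins
```

The drop-in approach is preferred over editing the shipped unit file directly, because package upgrades will not overwrite your changes.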
It is not typical that a Jenkins controller that is rarely used would be running dozens of processes in the background with only one agent and no job running.
My Jenkins controller has about 40 agents configured with the active agent count ranging from 10 to 40. The controller itself always has the reaper process tini and the Jenkins controller java process running. The most common other processes on my controller are when Jenkins checks the software configuration management systems for changes in the repositories that it is watching. Those checks are almost always git processes that start, run briefly, and then exit.
If you have many processes running on the controller, you may want to check your configuration to understand those processes.
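One way to check (a sketch, assuming the service runs as the default jenkins user) is to look at the process tree and thread counts, since some tools report each JVM thread as a separate "process" and can make a single java process look like dozens:

```shell
# Show the process tree for the jenkins user; the nlwp column is the
# thread count per process, so one java process with many threads is
# easy to distinguish from many separate processes
ps -u jenkins -o pid,ppid,nlwp,rss,cmd --forest
```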
Some users have reported that they mistakenly updated the Jenkins war file on their Debian- or RPM-based system rather than using the operating system package manager to perform the upgrade. In that case, they had a system that was attempting to start Jenkins from the System V init process while Jenkins itself expected to be started and managed by systemd. That unexpectedly created a few extra processes. That is not a supported upgrade path: if Jenkins is installed with an operating system package manager, it should be upgraded with the operating system package manager.
If you performed the Jenkins upgrade incorrectly by replacing the war file by hand instead of upgrading through apt, then you should reinstall the package with apt to correct the mistake.
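A sketch of that recovery on a Debian/Ubuntu system (assumes the package name jenkins from the official apt repository):

```shell
# Refresh the package index, then let apt restore the packaged war and
# service files, replacing any hand-copied jenkins.war
sudo apt-get update
sudo apt-get install --reinstall jenkins
```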
Are you confident that you’ve made the correct edits to the Jenkins service unit?
On my Ubuntu 22.04 machine, I installed Jenkins 2.361.1 using the Ubuntu installation instructions. I configured Jenkins and confirmed that top showed it was using 2 GB of heap. I edited the systemd override with systemctl edit jenkins and inserted the following changes:

[Service]
# Arguments for the Jenkins JVM
Environment="JAVA_OPTS=-Djava.awt.headless=true -Xmx512m"
I used systemctl daemon-reload to ensure that the new configuration was read, then used systemctl restart jenkins to restart the Jenkins controller.
After the restart, top reported that Jenkins was using less than 512M.
I've done this (the -Xmx512m option), but it still doesn't work for me. I also updated Jenkins to version 2.361.1.
This is how it looks after the VM restart (4.5 min after restart): more than 17 GB of RAM used while "idle" (nothing was being built).
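Before concluding the JVM is the culprit, it may help to separate page cache from actual process memory, since "used" RAM on Linux often includes cache the kernel reclaims on demand. A quick check (the java process name is an assumption; adjust if your service wraps it differently):

```shell
# "available" in free(1) is the realistic headroom; "used" can include
# buffers/cache that the kernel frees under pressure
free -h

# Resident set size (RSS, in kilobytes) of each java process, to see
# what Jenkins itself is actually holding
ps -C java -o pid,rss,cmd
```

If free -h shows most of the 17 GB under buff/cache while the java RSS is modest, the memory is not actually lost to Jenkins.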