We are having a rough time with Jenkins. Every 3 days or so our JVM, which has its heap set at 50 GB, gets consumed and we run out of memory. We suspect a plugin, but how in the world do we figure out which plugin? We are running JDK 17 on the controller and all the agents have JDK 17 as well.
Welcome back, @kaisers0s3.
If you’re trying to track down potential memory leaks in Jenkins, especially those introduced by plugins, there are a few approaches you could try:
- Enable GC logging
You might want to start Jenkins with extra JVM flags to capture garbage collection behavior, e.g.:
-Xlog:gc*:file=jenkins-gc.log:time,uptime,level,tags
That way you’ll have a timeline of how memory is being reclaimed (or not).
- Capture heap dumps
When Jenkins is about to run out of memory, a heap dump can be invaluable. You could trigger one manually with:
jmap -dump:live,format=b,file=heapdump.hprof <PID>
Or, if you’d rather have it happen automatically, add:
-XX:+HeapDumpOnOutOfMemoryError
to your JVM options (a sketch of wiring both the GC-logging and heap-dump flags into the Jenkins service configuration follows this list).
- Analyze the dump
Tools like Eclipse MAT or VisualVM can help you spot what’s holding onto memory. Things to pay attention to:
- Dominator tree → which objects dominate the heap?
- Retained sets → are certain plugin classes/packages keeping references around?
For a quick first pass before opening a full dump in MAT, see the jmap histogram sketch after this list.
- Check the Jenkins logs
Sometimes you’ll see warnings or stack traces that point to misbehaving plugins (a simple grep to start with is sketched after this list).
- Experiment with plugins
If a specific plugin shows up in the heap dump as the owner of a large object graph, you could try disabling it temporarily or updating it to the latest release to see if the leak improves (a quick way to inventory installed plugin versions is sketched after this list).
- Add runtime monitoring
The Metrics plugin is handy for getting visibility into memory trends over time (see the sample request after this list).
- Stay current
Running the latest Jenkins LTS and plugin versions often helps; many memory leak fixes come in quietly via updates.
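Putting the GC-logging and heap-dump flags together: here is a minimal sketch, assuming a Linux controller installed from the official packages and managed by systemd (the service name jenkins, the jenkins service user, the 50g heap, and the /var/log/jenkins and /var/lib/jenkins/heapdumps paths are all assumptions to adjust):
# Prepare a directory the controller can write heap dumps into (assumed path/user)
sudo mkdir -p /var/lib/jenkins/heapdumps && sudo chown jenkins: /var/lib/jenkins/heapdumps
# Open a systemd override for the Jenkins service
sudo systemctl edit jenkins
# In the editor, add the following (JAVA_OPTS must stay on one line):
[Service]
Environment="JAVA_OPTS=-Xmx50g -Xlog:gc*:file=/var/log/jenkins/jenkins-gc.log:time,uptime,level,tags -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/jenkins/heapdumps"
# Restart so the new flags take effect
sudo systemctl restart jenkins
With that in place the GC log shows whether the old generation keeps climbing between full collections, and any OutOfMemoryError leaves a .hprof file behind to analyze.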
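For the "Analyze the dump" step, a class histogram taken straight from the live process is a useful first pass before loading a multi-gigabyte .hprof into MAT; <PID> is the controller process, as in the jmap command above:
# Class histogram of the live heap (note: ':live' triggers a full GC first).
# Columns are instance count, shallow size in bytes, and class name; plugin
# packages such as org.jenkinsci.plugins.* or io.jenkins.plugins.* stand out quickly.
jmap -histo:live <PID> | head -n 40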
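For the log check, even a simple grep over the controller log can narrow down when memory trouble starts (the log path is an assumption; a systemd install may log to the journal instead, i.e. journalctl -u jenkins):
# Look for memory-related errors and note their timestamps
grep -niE "outofmemoryerror|gc overhead|java heap space" /var/log/jenkins/jenkins.log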
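For the plugin experiments, it helps to snapshot exactly which plugin versions are installed before disabling or updating anything. A sketch using the Jenkins CLI jar; the controller URL and the admin:API_TOKEN credentials are placeholders for your own:
# Fetch the CLI jar from the controller and record installed plugins with versions
curl -sO https://jenkins.example.com/jnlpJars/jenkins-cli.jar
java -jar jenkins-cli.jar -s https://jenkins.example.com/ -auth admin:API_TOKEN list-plugins | sort > plugins-before.txt
Diffing that file against a later snapshot makes it easy to tie a plugin change to the leak getting better or worse.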
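For runtime monitoring, the Metrics plugin exposes Dropwizard metrics, including JVM heap gauges, as JSON over HTTP once you generate an access key in the global configuration; the URL, the key, and the exact gauge names below are assumptions to adapt:
# Pull the JVM memory gauges periodically to build a trend of heap usage over the ~3-day cycle
curl -s "https://jenkins.example.com/metrics/METRICS_ACCESS_KEY/metrics" | jq '.gauges | with_entries(select(.key | startswith("vm.memory")))'
Graphing those numbers over a few days makes the slow climb, and any correlation with particular job or plugin activity, much easier to see.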
In short: heap dump analysis tends to be the most direct path to finding a leaking plugin. When you look at the biggest retained objects, plugin package names are often a giveaway.