44 million instances of org.jenkinsci.plugins.workflow.support.concurrent.Timeout

We upgraded to Jenkins 2.452.4 on JDK 17 (from 2.440.2 on JDK 11) and our memory consumption rose from 6 GB to 12 GB. We are running approximately 4000 jobs at once, which sit in the queue from the start. Analyzing a heap dump, we noticed 44 million instances of org.jenkinsci.plugins.workflow.support.concurrent.Timeout. From looking into the heap dump, I guess those objects are created by jobs waiting in the queue. Am I right?

We can’t downgrade because of our security policy, so we can’t compare whether this is normal, but it does not look normal to me :slight_smile:

I would be thankful for any hint. We will continue to diagnose the issue in the meantime.

Hello and welcome to this community, @mchoma. :wave:

The significant increase in memory consumption and the large number of org.jenkinsci.plugins.workflow.support.concurrent.Timeout instances suggest that there might be an issue with how Jenkins is handling jobs in the queue.

This could be related to changes in the newer Jenkins version or the JDK version. :thinking:

Here are some steps that could help you diagnose and potentially mitigate the issue:

  • Ensure all Jenkins plugins are up to date. Sometimes, plugin updates include performance improvements and bug fixes.
  • Check if there are any specific configurations or plugins that might be causing excessive memory usage. For example, certain pipeline configurations or plugins might not be optimized for the new Jenkins or JDK version.
  • Temporarily increase the heap size to accommodate the higher memory usage while you diagnose the issue. You can do this by modifying `JAVA_OPTS` in the Jenkins startup script: `export JAVA_OPTS="-Xms8g -Xmx16g"`
  • Use a tool like Eclipse MAT (Memory Analyzer Tool) to analyze the heap dump and identify the root cause of the memory consumption. Look for patterns or specific objects that retain a lot of memory (see the heap-dump capture sketch after this list).
  • If the issue is related to jobs waiting in the queue, consider optimizing the job queue management. This might involve adjusting the number of executors, using job throttling plugins, or distributing the load across multiple Jenkins instances.
  • Tune the garbage collection settings to improve memory management. You can add the following options to `JAVA_OPTS`: `export JAVA_OPTS="-Xms8g -Xmx16g -XX:+UseG1GC -XX:MaxGCPauseMillis=200"`
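If you want a fresh dump containing only live objects for MAT, the JVM's standard `HotSpotDiagnosticMXBean` can produce one. This is a minimal sketch, assuming it runs inside the controller JVM whose heap you want to inspect; the class name and output path are arbitrary examples, not anything Jenkins-specific:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public class HeapDump {
    public static void main(String[] args) throws IOException {
        // Proxy to the JVM's built-in diagnostic MXBean.
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // The second argument "true" dumps only live (reachable) objects,
        // which keeps the .hprof smaller and filters out unreferenced garbage
        // that simply has not been collected yet.
        diag.dumpHeap("/tmp/jenkins-heap.hprof", true); // path is just an example
    }
}
```

From outside the process, `jmap -dump:live,format=b,file=/tmp/jenkins-heap.hprof <pid>` does the same thing. In MAT, the class histogram and the dominator tree should show what is keeping the Timeout instances (and whatever they reference) alive.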

@poddingue Thanks for your response and hints. We are still suffering from this issue and will continue investigating.

Blind shot: looking into the implementation of the Timeout class, I see that a ScheduledExecutorService is used, so I searched for recent bugs in that class and found https://bugs.openjdk.org/browse/JDK-8338765. I have no idea whether it is related; I am just mentioning it here for the record.
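For anyone following along, here is a rough sketch of the general pattern such a try-with-resources timeout helper follows. This is not the plugin's actual code; the class and method names are made up for illustration. Each call schedules an interrupt task on a shared ScheduledExecutorService and cancels it on close. Note that by default a ScheduledThreadPoolExecutor keeps cancelled tasks in its internal queue until their scheduled time elapses unless `setRemoveOnCancelPolicy(true)` is used, which is one way such objects can linger in a heap dump.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Simplified illustration only (not the plugin's implementation): a guard that
// interrupts the current thread after a deadline, backed by a shared scheduler.
final class TimeoutGuard implements AutoCloseable {
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();

    private final ScheduledFuture<?> task;

    private TimeoutGuard(long time, TimeUnit unit) {
        Thread current = Thread.currentThread();
        // One guard instance plus one scheduled task per call; with thousands of
        // jobs repeatedly waiting/polling under such a guard, these small objects
        // can accumulate until the scheduler drops the cancelled tasks and GC runs.
        this.task = SCHEDULER.schedule(current::interrupt, time, unit);
    }

    static TimeoutGuard limit(long time, TimeUnit unit) {
        return new TimeoutGuard(time, unit);
    }

    @Override
    public void close() {
        task.cancel(false);
    }
}

// Usage sketch:
// try (TimeoutGuard guard = TimeoutGuard.limit(30, TimeUnit.SECONDS)) {
//     // blocking work that should be interrupted after 30 seconds
// }
```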


Thanks for your feedback, @mchoma. :+1:
It looks like the fix for this issue has been merged, but I have yet to find which JDK version it has been incorporated into.

What version of the JDK are you running?