Hello Team,
We currently have 9 static agents on our Jenkins controller. All of them are identical and share the label "docker". The problem is that jobs keep landing specifically on agent3 and agent4, which is leading to memory issues. Please help us.
The first thing I would check is your node settings.
Under Usage, the option should read "Use this node as much as possible", not "Only build jobs with label expressions matching this node".
For this to work properly, the setting needs to be the same across all 9 nodes.
Alternatively, you could tie specific jobs to a specific node.
If you really need to randomize, you could give each node a unique label (each node can have several labels at the same time) and use Groovy's random function to generate a random number that "selects" the node for you, as in the sketch below.
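For example (only a sketch: it assumes you add extra labels docker-1 through docker-9 to the agents, and in a sandboxed Jenkinsfile the java.util.Random call may need script approval):

```groovy
// Sketch: each agent is assumed to carry a unique extra label
// ("docker-1" .. "docker-9") in addition to the shared "docker" label.
def labels = (1..9).collect { "docker-${it}" }

// Pick one of the labels at random for this build.
def picked = labels[new Random().nextInt(labels.size())].toString()
echo "Randomly selected label: ${picked}"

node(picked) {
    // The normal build steps run on the randomly chosen agent.
    sh 'echo "Building on $NODE_NAME"'
}
```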
Also, the official Jenkins documentation is really good for reference and learning; I can highly recommend reading it.
Managing Nodes (jenkins.io)
How many executors do you have configured on your agents? If it is more than one, then it can of course happen that 2 builds run in parallel on an agent.
Are you running freestyle jobs or pipeline jobs? For freestyle jobs I know that Jenkins has some stickiness and tries to reuse the same agent as for the previous build.
And is the problem with the memory of the agent's JVM, or a general problem where the whole machine runs into memory pressure? If it is the JVM, you might want to increase the heap with java's -Xmx option, as shown below.
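For an inbound agent that would mean adding the flag to the java command that starts agent.jar (the URL, name and secret below are placeholders); for an SSH agent you can put the same flag into the node's "JVM Options" field:

```
# Example only: inbound agent started with a 4 GiB heap
java -Xmx4g -jar agent.jar \
  -url https://your-jenkins.example.com/ \
  -name agent3 \
  -secret <agent-secret> \
  -workDir /home/jenkins
```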
Memory consumption also depends on what you run on your agents.
We compile a huge C/C++ project, our agents have 360 GiB of memory, and we still see sporadic OOM situations in the compiler calls.
We have 4 executors configured on each agent. We are running multibranch and pipeline jobs.
@twaibel We are using "Use this node as much as possible".
If your agents have 4 executors but are not able to handle 4 concurrent builds, then you should reduce the number of executors.
Hello @mawinter69 @twaibel
Seeking your advice: should we increase the memory of the agent machines, or should we add more agents with 4 executors each?
First reduce the number of executors on the existing agents. If that leads to too much queueing, then go for more agents.
@mawinter69
16 CPU Cores
640 GB Storage
32 GB RAM
1 Volume
That is our current setup. Do you still think we have to reduce the executors?
The question is where you have the memory problems. Is it the JVM of the agent? Or do you have a general memory problem where the kernel starts to kill processes? If it is the former, increase the heap of the JVM process; if it is the latter, reduce the number of executors.
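If you are not sure which of the two it is, one quick check on a Linux agent is whether the kernel's OOM killer shows up in the log, for example:

```
# Look for OOM-killer activity in the kernel log (Linux)
dmesg -T | grep -i -E "out of memory|oom-killer"
```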
Hello @mawinter69
The space gets filled up at /var/lib/docker/overlay2.
We also run docker system prune daily, but that is not helping much.
We tried adding an external volume as well, but that also got filled up.