Problem with disk space on builder node

We have a lot of jobs running on one node, and disk space runs out quickly.
We wrote a script to remove unnecessary files, but it is sometimes interrupted by an error like this:

script:

     sudo docker image prune -a -f --filter "until=1h"
     sudo rm -rf /opt/jenkins/workspace/* && sudo rm -rf /tmp/postgresql-embed*

error:

     rm: cannot remove ‘/opt/jenkins/workspace/webapp_release/target’: Directory not empty

How can we improve the script to eliminate this error?

Sounds like you're deleting the workspace of a running job. That seems bad.

You can for sure use https://plugins.jenkins.io/ws-cleanup/ to have jobs clean up after themselves. I think it has support for a global setting. It's been a while since I used it.

You could just add || true to the end of your rm statement so it doesn't fail, and change your && to ; so the second command runs no matter what.
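To make the idea concrete, here is a sketch of that pattern in an isolated temp directory (standing in for /opt/jenkins/workspace, so it's safe to run; the real script would keep the sudo/docker lines):

```shell
#!/bin/sh
# Stand-in for /opt/jenkins/workspace so the sketch is safe to run.
WORKSPACE=$(mktemp -d)
mkdir -p "$WORKSPACE/job_a" "$WORKSPACE/job_b"

# '|| true' swallows a non-zero exit from rm (e.g. "Directory not empty"
# on a busy workspace), so the script keeps going instead of aborting.
# Using ';' (or separate lines) instead of '&&' means the next cleanup
# step runs even if this one failed.
rm -rf "$WORKSPACE"/* || true
rm -rf /tmp/postgresql-embed-does-not-exist-* || true

echo "leftover entries: $(ls -A "$WORKSPACE" | wc -l)"
```

Note this only hides the error; the busy directory may still survive one run and get picked up by the next.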


Also, don't underestimate just adding more space. You can get pretty beefy HDDs pretty cheap these days. And if you are using the cloud, you can use one of the cloud provider plugins to create new instances on demand, so you don't need to worry about disk space at all.

We use AWS. We tried that, but our team eats up 50-100 GB per day, and no matter how big the disk is, they will fill it in a couple of days.

If you are already using AWS, consider switching to the "EC2 Cloud" configuration to spin up nodes on demand. As these will be coming up and down (after an idle timeout) on a regular basis, you can get away with not needing any cleanup, and even if a node manages to run out of space, you can just delete it and you are back in business.


Oh, as I was falling asleep last night I realized it's probably better to use find to delete files that were last modified more than x time ago. That would help prevent deleting in-use files.
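A minimal sketch of that approach, run here against a throwaway directory so it's safe to try (in the real script the path would be /opt/jenkins/workspace, and the 120-minute threshold is just an example; `touch -d` assumes GNU coreutils):

```shell
#!/bin/sh
# Throwaway directory standing in for /opt/jenkins/workspace.
DEMO=$(mktemp -d)
touch "$DEMO/fresh.log"                    # modified just now
touch -d '3 hours ago' "$DEMO/stale.log"   # GNU touch: backdate mtime

# Delete only regular files last modified more than 120 minutes ago,
# leaving anything a running job has touched recently alone.
find "$DEMO" -type f -mmin +120 -delete

ls "$DEMO"   # only fresh.log remains
```

Unlike a blanket rm -rf of the workspace root, this skips files a currently running build is still writing, as long as the age threshold is longer than your longest build.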
