I have a site that I'm deploying via Jenkins, and due to the large number of images that are part of the site, I've not checked the 'Delete Workspace' checkbox. However, upon review, it appears the workspace is deleted even when the box is unchecked. It takes about 20 minutes to rsync the image files into the workspace, even though there are only a few new images to be added.
Is there something I'm not understanding about the delete-workspace behavior, or is there a better way of integrating binary files into a project that is then synced with the remote live site?
Can you go into how you're testing your assumptions and how you came to your conclusion? It actually takes a lot of effort to delete workspaces, but Jenkins may be spinning up a new one.
Sure. I logged into the Jenkins worker, ran `while true; do ls; sleep 3; done` in the workspace, and kicked off the Jenkins job. I watched as the files and directories were deleted over the course of a few iterations, and then the directory structure and files were recreated.
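For reference, the loop was just this, run from the job's workspace directory (the path here is made up):

```sh
# Print the workspace listing every 3 seconds so the deletion and
# recreation of files is visible while the build runs.
cd /var/lib/jenkins/workspace/photo-site   # hypothetical workspace path
while true; do ls; sleep 3; done
```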
I did enable 'Delete Workspace' and watched: all the files were deleted immediately, with no delay, unlike when it was unchecked.
Someone on Reddit suggested removing the plugin, which might solve my issue, but certainly not the general issue of the delete-and-recreate.
For the majority of my projects, a delete and recreate is a non-issue. But this is a pretty large photo site, and it's taking 20 minutes to run.
Okay, I've deleted the Workspace Cleanup plugin; however, a rebuild of the project, as noted in the output below, deleted the workspace directory entirely and then started adding it back in again. So it's again a 20-minute process to build the directory structure and incorporate the binaries before syncing the directory with the live site.
So I'm out of luck then, I guess. It just takes 20 minutes to update the site because Jenkins, for some unknown reason, deletes the workspace. I wonder why there's a 'Workspace Cleanup' plugin, then.
At the time the thread dump was taken, a single build was executing, and it was running some sort of script (shell/Batch/…). If this really is when things get deleted, review your job configuration and build script. To narrow it down further, we'd need a thread dump from the agent process.
I suspect that step 4 is causing the problem, in combination with JENKINS-22795.
By performing step 4, you are assuming that the git plugin will repopulate the .git directory without harming the rest of the repository. There were cases in early use of the git plugin where the failure to empty a directory before a fresh clone would cause problems that were very difficult to diagnose. By removing the .git directory, you’ve caused the git plugin to believe that it must perform a fresh clone. A fresh clone in the git plugin assumes that it has full control of the workspace and empties the workspace so that the initial git clone has the best chance of success.
By removing the .git directory, you're also ensuring that the next run will need to repopulate that directory. Since it contains a complete copy of the history of the repository, that should generally be avoided.
Don’t remove the .git directory. Configure your rsync command to ignore the .git directory and its contents.
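For example, something along these lines (the user, host, and paths are placeholders for your setup, and I'm assuming a mirror-style sync; drop `--delete` if you don't mirror deletions):

```sh
# Push the workspace to the live server, but never copy the .git
# directory. Note: with --delete, excluded paths on the receiver
# are also left alone unless you add --delete-excluded.
rsync -av --delete --exclude='.git' ./ deploy@www.example.com:/var/www/site/
```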
That's an idea. I'm using a similar process with GitLab CI, and it deletes the working directory as well. One of the folks on my team also recommended keeping the artifacts in an entirely different directory.
I've used rsync with `--exclude` before for other purposes. I'll give that a try, thanks!
That appears to be the solution. I added the exclude to the rsync and didn't delete the .git directory, and it doesn't delete the directory any more. Oddly, a second `--exclude`, a `{}` exclude, or even an `--exclude-from` doesn't stop the `.gitlab-ci.yml` file from being copied. I've used `--exclude-from` to skip several files in other situations without issue, so this is a bit odd. I'll puzzle it out, though.
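For reference, these are the standard forms (the host and paths are placeholders); one thing I'll check is that the `{}` form relies on shell brace expansion, which silently fails if the braces are quoted:

```sh
# Separate --exclude flags, one per pattern:
rsync -av --exclude='.git' --exclude='.gitlab-ci.yml' ./ deploy@www.example.com:/var/www/site/

# Brace expansion only happens unquoted; the shell rewrites this
# into the two --exclude flags above before rsync ever sees it:
rsync -av --exclude={.git,.gitlab-ci.yml} ./ deploy@www.example.com:/var/www/site/

# Or keep the patterns in a file, one per line (.git, .gitlab-ci.yml):
rsync -av --exclude-from='rsync-excludes.txt' ./ deploy@www.example.com:/var/www/site/
```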