Bad file descriptor when running a Jenkinsfile from GitLab

Hi team,

This is the third time I have seen the following error:
error: file write error: Bad file descriptor
fatal: unable to write loose object file
fatal: unpack-objects failed

It happens only if the pipeline is executed from SCM. Any idea how this can be fixed, besides the obvious workaround of moving the pipeline definition into the Jenkins job configuration?
I know we are running quite an old Jenkins version (2.319.3), but at the moment it cannot be upgraded. Still, the issue happens from time to time. Once, restarting the Jenkins server fixed it, but that did not work this time.

Thank you!

This doesn’t appear to be a Jenkins problem. It looks like a host misconfiguration of open file handle limits.

Review your file handle limits and configure them for the Jenkins user on your host.
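If it helps, here is roughly how I would check and raise the limits on a Linux host. The systemd service name `jenkins` and the `jenkins.war` process name are assumptions on my part; adjust for your setup:

```shell
# The soft limit is what the process currently gets; the hard limit
# is the ceiling the jenkins user is allowed to raise it to.
ulimit -Sn   # soft limit on open file descriptors
ulimit -Hn   # hard limit

# If Jenkins runs as a systemd service (assumed name: jenkins), the
# limits of the live process are what actually matter:
#   cat /proc/$(pgrep -f jenkins.war | head -n1)/limits
#
# To raise the limit persistently via a systemd override:
#   sudo systemctl edit jenkins
#     [Service]
#     LimitNOFILE=65536
#   sudo systemctl daemon-reload && sudo systemctl restart jenkins
```

Note that editing `/etc/security/limits.conf` alone does not affect systemd-managed services; the `LimitNOFILE=` override is the reliable route there.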

Alternatively, I searched the web for your literal error and found it may be a Git misconfiguration: Git push - fatal: write error: Bad file descriptor - Stack Overflow

You haven’t shared much information about how you encountered the error. Please share more detail about what you’re doing when the error occurs: commands, configuration, etc.

Hi Sam, thank you for your feedback.
Here is the full log:
Started by user ***
hudson.plugins.git.GitException: Command "git fetch --tags --force --progress --prune -- origin +refs/heads/test2:refs/remotes/origin/test2" returned status code 128:
stderr: remote: Enumerating objects: 14, done.
remote: Counting objects: 7% (1/14)
remote: Counting objects: 14% (2/14)
remote: Counting objects: 21% (3/14)
remote: Counting objects: 28% (4/14)
remote: Counting objects: 35% (5/14)
remote: Counting objects: 42% (6/14)
remote: Counting objects: 50% (7/14)
remote: Counting objects: 57% (8/14)
remote: Counting objects: 64% (9/14)
remote: Counting objects: 71% (10/14)
remote: Counting objects: 78% (11/14)
remote: Counting objects: 85% (12/14)
remote: Counting objects: 92% (13/14)
remote: Counting objects: 100% (14/14)
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 12% (1/8)
remote: Compressing objects: 25% (2/8)
remote: Compressing objects: 37% (3/8)
remote: Compressing objects: 50% (4/8)
remote: Compressing objects: 62% (5/8)
remote: Compressing objects: 75% (6/8)
remote: Compressing objects: 87% (7/8)
remote: Compressing objects: 100% (8/8)
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 5), reused 0 (delta 0), pack-reused 0
error: file write error: Bad file descriptor
fatal: unable to write loose object file
fatal: unpack-objects failed

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$500(
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(
at jenkins.plugins.git.GitSCMFileSystem$
at jenkins.scm.api.SCMFileSystem.of(
at jenkins.scm.api.SCMFileSystem.of(
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(
at hudson.model.ResourceController.execute(

Finished: FAILURE

It happens only if the Jenkins pipeline is configured to use "Pipeline script from SCM". If the pipeline script is written inline in the job configuration, everything works. So it is quite puzzling.

Changing the branch the files are pulled from fixes the issue. If you go back to the old, broken branch and run the job again, it fails with the same error. If you then go back to the new branch that was working, it starts to fail as well.
I suspect there may be some cache in JENKINS_HOME that causes this. Any idea what could be cleared so this can be permanently resolved?
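For reference, the stack trace goes through jenkins.plugins.git.GitSCMFileSystem, which keeps bare clone caches under $JENKINS_HOME/caches, so that is where I was thinking of looking. A sketch of how I might check those caches (the /var/lib/jenkins default path is an assumption; I would stop Jenkins before deleting anything):

```shell
# Inspect the Git caches used by "Pipeline script from SCM".
# Assumption: JENKINS_HOME defaults to /var/lib/jenkins; the cache
# directories are named git-<hash>.
JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"
CACHE_DIR="$JENKINS_HOME/caches"

if [ -d "$CACHE_DIR" ]; then
  # Run git fsck in each cached repo; a corrupt cache is a likely
  # cause of per-branch failures that survive a Jenkins restart.
  for d in "$CACHE_DIR"/git-*; do
    [ -d "$d" ] || continue
    git -C "$d" fsck --no-progress >/dev/null 2>&1 || echo "corrupt: $d"
  done
else
  echo "no cache dir at $CACHE_DIR"
fi

# To reset a corrupt cache, remove only that directory; Jenkins
# re-clones it on the next run:
#   rm -rf "$JENKINS_HOME/caches/git-<hash>"
```

Would clearing just the affected git-* directory be enough, or is there other state tied to the branch?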