How to lock the entire job after agent assignment but before stages in Jenkins pipeline?

Hello,

I have a long-running job that runs on a single agent with a custom workspace, and it uses the disableConcurrentBuilds() option because two builds would interfere with each other if they shared that workspace. I need the custom workspace because it is several hundred gigabytes and checking it out from SCM takes a long time, so I reuse the same workspace for every run of this job.
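For context, the original job looked roughly like this (the label, workspace path, and stage are only illustrative):

pipeline {
    agent {
        node {
            label 'agent01'
            // Reused across all builds; too large to check out from SCM each time.
            customWorkspace '/data/myproject-workspace'
        }
    }
    options {
        // Prevents two builds from running in the same workspace at the same time.
        disableConcurrentBuilds()
    }
    stages {
        stage('Running') {
            steps {
                echo "Building in ${env.WORKSPACE} on ${env.NODE_NAME}."
            }
        }
    }
}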

Recently, I added another agent with its own custom workspace (which has the same path as the original agent’s workspace), and I want to be able to run this job concurrently on both agents, but not on the same agent. So, I removed disableConcurrentBuilds() and added resource locking in the options {} block.

I am testing this setup with the following pipeline:

pipeline {
    agent {
        node {
            label "${params.Agent}"
        }
    }
    options {
        // Intended: one lock per job/agent combination.
        lock "label-${env.JOB_BASE_NAME}-${env.NODE_NAME}"
    }
    parameters {
        choice(name: 'Agent', choices: ['agents-label', 'agent01', 'agent02'], description: 'Choose a specific agent or leave the default label for automatic selection.')
    }
    stages {
        stage('Running') {
            steps {
                echo "Starting ${env.JOB_BASE_NAME} #${currentBuild.number} on ${env.NODE_NAME}."
                sleep time: 1, unit: 'MINUTES'
                echo "Finished ${env.JOB_BASE_NAME} #${currentBuild.number} on ${env.NODE_NAME}."
            }
        }
    }
}

When I start the job with the agents-label parameter, it tries to acquire the lock label-mytestjob-null, which isn’t correct. I believe this happens because no agent has been assigned yet when the lock in the options block is evaluated.

In my original long-running pipeline there are several stages, and wrapping the steps of each stage in a lock {} block is not an option, because the lock would be released between stages and another build could grab the workspace in the meantime.

How can I achieve the desired behavior? Is it possible to lock the entire job, for example, in a script after the agent is assigned but before the stages start?
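To illustrate what I have in mind, a scripted-pipeline sketch like the one below (untested; the resource name format is just an example) would build the lock name only after the node has been allocated:

properties([
    parameters([
        choice(name: 'Agent', choices: ['agents-label', 'agent01', 'agent02'],
               description: 'Choose a specific agent or leave the default label for automatic selection.')
    ])
])

node(params.Agent) {
    // NODE_NAME is known here, because the agent has already been allocated.
    lock("label-${env.JOB_BASE_NAME}-${env.NODE_NAME}") {
        stage('Running') {
            echo "Starting ${env.JOB_BASE_NAME} #${currentBuild.number} on ${env.NODE_NAME}."
            sleep time: 1, unit: 'MINUTES'
            echo "Finished ${env.JOB_BASE_NAME} #${currentBuild.number} on ${env.NODE_NAME}."
        }
    }
}

Though I suppose a build would then hold an executor while it waits for the lock, and I would lose the declarative syntax.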

Thank you in advance for your help.

You could limit the agent to a single executor. That automatically prevents anything from running in parallel on that agent.

I’m afraid these agents are used for other jobs as well, and they require more than one executor.

You could define two agents on the same host with different root directories. One agent gets a single executor and a dedicated label; the second agent gets more executors and a different label. Then you can configure your jobs so that the big one runs on the agent with the single executor and all other jobs run on the other agent.
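For example, the big job could then target the single-executor label and keep the shared workspace; something like this (the label and path are placeholders):

pipeline {
    agent {
        node {
            // Dedicated label of the single-executor agents.
            label 'bigjob-serial'
            customWorkspace '/data/myproject-workspace'
        }
    }
    stages {
        stage('Running') {
            steps {
                // With only one executor on each of these agents, two builds can run
                // on different hosts but never share a workspace on the same host.
                echo "Running ${env.JOB_BASE_NAME} #${currentBuild.number} on ${env.NODE_NAME}."
            }
        }
    }
}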


That sounds like a clever workaround for my problem. I’ll check it out and report back. Thanks!