Hi,
I’ve got a problem with my configuration for a specific use case.
In my configuration I have four nodes set up to build my projects:
- frm01
- frm02
- frm03
- frm04
I created a pipeline that looks like this:
```groovy
pipeline {
    agent { label 'frm01 || frm02 || frm03 || frm04' }
    options {
        gitLabConnection("gitlab")
    }
    stages {
        stage("TEST") {
            steps {
                gitlabCommitStatus(name: "TEST") {
                    build job: "JOB_1", parameters: [
                        string(name: "nodeName", value: "${env.NODE_NAME}")
                    ]
                    build job: "JOB_2", parameters: [
                        string(name: "nodeName", value: "${env.NODE_NAME}")
                    ]
                    build job: "JOB_3", parameters: [
                        string(name: "nodeName", value: "${env.NODE_NAME}")
                    ]
                    build job: "JOB_4", parameters: [
                        string(name: "nodeName", value: "${env.NODE_NAME}")
                    ]
                }
            }
        }
    }
}
```
The jobs themselves are simple. JOB_1, for example, is just:
```groovy
echo "JOB_1"
sleep 30
echo "Sleep END"
```
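For context, the downstream jobs presumably consume the nodeName parameter as their agent label so they land on the same machine as the parent build. A minimal sketch of what JOB_1 might look like (the parameter wiring is my assumption, not shown in the original post):

```groovy
// Hypothetical JOB_1 Jenkinsfile: the nodeName string parameter passed
// by the parent pipeline is used as the agent label, so this job runs
// on the same machine that the parent build ran on.
pipeline {
    parameters {
        string(name: 'nodeName', defaultValue: '',
               description: 'Node the parent pipeline ran on')
    }
    agent { label "${params.nodeName}" }
    stages {
        stage("WORK") {
            steps {
                echo "JOB_1"
                sleep 30
                echo "Sleep END"
            }
        }
    }
}
```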
I would like Jenkins to distribute the pipelines according to the availability of resources. The problem occurs when many builds are started at once: the pipelines sit in the queue waiting for a free executor, and then start on an executor that should be reserved for running the downstream JOBs. In this situation the JOB is blocked. Does anyone have an idea how to solve this issue/use case?
Comments:
- All JOBs must be executed one after another on the same machine — that’s why I use the nodeName label
- The jobs in the real case are more complicated — that’s why they are defined outside the pipeline
- I’ve already tried to use a lockable resource, but it’s impossible to obtain the lock before the pipeline starts: env.NODE_NAME is not accessible from pipeline { options }. Moving the lock into the stages didn’t help either, because the pipeline starts up and blocks an executor anyway.
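One direction I’d sketch for the lock idea (assuming the Lockable Resources plugin with one resource per machine, frm01..frm04, sharing a label such as 'frm-nodes' — that resource setup is my assumption): give the parent pipeline `agent none`, so it holds no heavyweight executor while waiting, and pick the target machine from the acquired lock instead of from env.NODE_NAME:

```groovy
// Sketch only: assumes Lockable Resources configured with resources
// named frm01..frm04 under the label 'frm-nodes' (hypothetical setup).
pipeline {
    agent none  // parent holds no executor, so it cannot block the JOBs
    options {
        gitLabConnection("gitlab")
    }
    stages {
        stage("TEST") {
            steps {
                gitlabCommitStatus(name: "TEST") {
                    // Block until one machine's resource is free; its name
                    // is exposed in env.LOCKED_NODE for the duration.
                    lock(label: 'frm-nodes', quantity: 1, variable: 'LOCKED_NODE') {
                        build job: "JOB_1", parameters: [
                            string(name: "nodeName", value: env.LOCKED_NODE)
                        ]
                        // JOB_2 .. JOB_4 follow the same pattern
                    }
                }
            }
        }
    }
}
```

Whether this fits depends on the real jobs, but it avoids the situation where the parent pipeline occupies the very executor its downstream JOB needs.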
Best regards,
Pawel