Using parallel to run nested parallel stages

Hi Community,

I have one type of test (tlc test) with two stages, setup and run, that run in parallel. Now I want to add another type of test (qlc test) and reuse the existing setup and run jobs. However, when I run this, only one type of test gets started instead of two.

Here is my test script:

def modules = ["tlc","qlc"]
def failFast = false
def builds = [:]
for(module in modules){
    def item = module+"-test"
    println("${item} is running")
    builds[module] = {
        
            def run = [:]
            
            run[module+"-setup"] = {
                build job : "down1",parameters:[
                    string(name: 'type', value: module)
                ]
            }
            
            run[module+"-test"] = {
                build job : "down2",parameters:[
                    string(name: 'type', value: module)
                ]
            }
            run.failFast = true
            parallel run
    }
    
}

builds.failFast = failFast
parallel builds

Pipeline result:
As you can see, the downstream jobs only boot up down1 #9 and down2 #9; both branches end up running the qlc builds.

11:44:55  tlc-test is running
11:44:55  [Pipeline] echo
11:44:55  qlc-test is running
11:44:55  [Pipeline] parallel
11:44:55  [Pipeline] { (Branch: tlc)
11:44:55  [Pipeline] { (Branch: qlc)
11:44:56  [Pipeline] parallel
11:44:56  [Pipeline] { (Branch: qlc-setup)
11:44:56  [Pipeline] { (Branch: qlc-test)
11:44:56  [Pipeline] parallel
11:44:56  [Pipeline] { (Branch: qlc-setup)
11:44:56  [Pipeline] { (Branch: qlc-test)
11:44:56  [Pipeline] build (Building down1)
11:44:56  Scheduling project: down1
11:44:56  [Pipeline] build (Building down2)
11:44:56  Scheduling project: down2
11:44:56  [Pipeline] build (Building down1)
11:44:56  Scheduling project: down1
11:44:56  [Pipeline] build (Building down2)
11:44:56  Scheduling project: down2
11:45:05  Starting building: down2 #9
11:45:05  Starting building: down2 #9
11:45:05  Starting building: down1 #9
11:45:05  Starting building: down1 #9
11:45:15  Build down1 #9 completed: SUCCESS
11:45:15  Build down1 #9 completed: SUCCESS
11:45:15  [Pipeline] }
11:45:15  [Pipeline] }
11:45:25  Build down2 #9 completed: SUCCESS
11:45:25  Build down2 #9 completed: SUCCESS
11:45:25  [Pipeline] }
11:45:25  [Pipeline] // parallel
11:45:25  [Pipeline] }
11:45:25  [Pipeline] }
11:45:25  [Pipeline] // parallel
11:45:25  [Pipeline] }
11:45:25  [Pipeline] // parallel
11:45:25  [Pipeline] End of Pipeline
11:45:25  Finished: SUCCESS

Using for-in loops in groovy-cps is a known problem IIRC: the branch closures capture the loop variable itself rather than a per-iteration copy, so by the time parallel actually runs them they all see its last value ("qlc"). I would rewrite this using collectEntries to populate the builds map instead, since the closure parameter is freshly scoped on every iteration:

def modules = ["tlc","qlc"]
def failFast = false
def builds = modules.collectEntries { module ->
    return [(module): {
        def item = module+"-test"
        println("${item} is running")
        
        def run = [:]
        run[module+"-setup"] = {
            build job : "down1",parameters:[
                string(name: 'type', value: module)
            ]
        }
            
        run[module+"-test"] = {
            build job : "down2",parameters:[
                string(name: 'type', value: module)
            ]
        }
        run.failFast = true
        parallel run
    }]
}
builds.failFast = failFast
parallel builds
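
If you prefer to keep the for-in loop, another common workaround is to copy the loop variable into a local variable inside the loop body, so each closure captures its own value instead of the shared one. A minimal sketch of that variant, reusing the same down1/down2 jobs from above (untested here):

def modules = ["tlc", "qlc"]
def builds = [:]
for (module in modules) {
    def m = module  // local copy: each closure captures this per-iteration variable
    builds[m] = {
        def run = [:]
        run[m + "-setup"] = {
            build job: "down1", parameters: [string(name: 'type', value: m)]
        }
        run[m + "-test"] = {
            build job: "down2", parameters: [string(name: 'type', value: m)]
        }
        run.failFast = true
        parallel run
    }
}
builds.failFast = false
parallel builds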

Thanks, @stuartrowe

This really solves my problem.

@stuartrowe Is there any benefit to using collectEntries instead of each, or are they similar in their behavior?

I think collectEntries simplifies building the builds map that is passed as the argument to the parallel step. You could also iterate over the list with each and insert entries into a map declared beforehand, as in the sketch below.
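
For example, a rough each-based equivalent under the same assumptions as the snippets above (same down1/down2 jobs and type parameter):

def modules = ["tlc", "qlc"]
def failFast = false
def builds = [:]  // declared up front, populated inside each
modules.each { module ->
    builds[module] = {
        def run = [:]
        run[module + "-setup"] = {
            build job: "down1", parameters: [string(name: 'type', value: module)]
        }
        run[module + "-test"] = {
            build job: "down2", parameters: [string(name: 'type', value: module)]
        }
        run.failFast = true
        parallel run
    }
}
builds.failFast = failFast
parallel builds

Because the each closure gets a fresh module parameter on every iteration, the shared-loop-variable problem from the original for-in version does not come back; the main difference from collectEntries is that you mutate an external map instead of building it from the return values.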