Dynamic multi stage not executed completely

Jenkins Version: Jenkins 2.440.1

My idea was to create a sort of dynamic backup process that automatically detects newly deployed servers and adds them to a script.
The script consists of 3 different backup parts: server, database, mailbox.
While the server backup applies to ALL servers, the other two only apply to servers that have the respective role.
Please consider this information replaceable, as the question is not about the backup scripts themselves but about the Jenkins script that triggers them. You could just as well say: I have 3 different scripts in place, which are executed on several nodes, some of them on the same node.

Here is the script in its current state:

def STAGESLIST = []
def AGENTS = [:]
AGENTS["server"] = ['DERSAPP001','DERSAPP002','DERSAPP003','DERSAPP004','DERSAPP005','DERSAPP006','DERSAPP007','DERSAPP008','DERDAPP003','DERDAPP006']
//AGENTS["server"] = ['DERSAPP001']
AGENTS["database"] = ['DERSAPP002']
AGENTS["mailbox"] = ['DERSAPP005']

for (THISBACKUP in AGENTS) {
    for (THISAGENT in THISBACKUP.value) {
        def address = InetAddress.getByName(THISAGENT)
        def timeoutMillis = 1000
        // only add reachable hosts to the stage list
        if (address.isReachable(timeoutMillis)) {
            println "addressing ${THISAGENT} for ${THISBACKUP.key}"
            STAGESLIST << [THISAGENT, THISBACKUP.key]
        }
    }
}
def generateStage(nodeLabel, bktype) {
    println "${nodeLabel} ${bktype}"
    return {
        stage("${bktype}-Backup on ${nodeLabel}") {
            node(nodeLabel) {
                checkout scm
                sh "chmod +x ${workspace}/files/backup/${bktype}-backup.sh"
                sh "sudo ${workspace}/files/backup/${bktype}-backup.sh"
            }
        }
    }
}

def parallelStagesMap = STAGESLIST.collectEntries {
  ["${it[0]}" :  generateStage(it[0], it[1]) ]
}



pipeline {
    options {
        timeout(time: 6, unit: 'HOURS')
    }
    agent { label 'MASTER' }
    stages {
        stage('Parallel Stages') {
            steps {
                script {
                    parallel parallelStagesMap
                }
            }
        }
    }
}


As you can see, I adapted the "parallelStagesMap" pattern from other scripts to generate different stages on demand.
In the example you can see that DERSAPP002 and DERSAPP005 each have one role backup in addition to the regular server backup.
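For readers unfamiliar with the pattern: Jenkins' `parallel` step takes a map whose keys are branch names and whose values are closures. A minimal plain-Groovy sketch of that idea, runnable outside Jenkins (the sequential `each` loop is a hypothetical stand-in for the real `parallel` step, which runs the closures concurrently):

```groovy
// Build a branch map the same way the pipeline does: branch name -> closure.
def executed = []
def branches = [
    'DERSAPP001': { executed << 'server backup on DERSAPP001' },
    'DERSAPP002': { executed << 'database backup on DERSAPP002' },
]

// Stand-in for Jenkins' `parallel` step: invoke every closure in the map
// (Jenkins would do this concurrently, here we run them one after another).
branches.each { name, body -> body() }

assert executed.size() == 2
```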

Now for the issue:
All jobs targeting just one node are triggered fine and finish successfully.
But on the two nodes mentioned above, only the role job is executed; the server backup is never started and hangs in the pipeline until the timeout.
From the Jenkins Log, you can also see, that all stages are discovered:

[Pipeline] Start of Pipeline
[Pipeline] echo
addressing DERSAPP001 for server
[Pipeline] echo
addressing DERSAPP002 for server
[Pipeline] echo
addressing DERSAPP003 for server
[Pipeline] echo
addressing DERSAPP004 for server
[Pipeline] echo
addressing DERSAPP005 for server
[Pipeline] echo
addressing DERSAPP006 for server
[Pipeline] echo
addressing DERSAPP007 for server
[Pipeline] echo
addressing DERSAPP008 for server
[Pipeline] echo
addressing DERDAPP003 for server
[Pipeline] echo
addressing DERDAPP006 for server
[Pipeline] echo
addressing DERSAPP002 for database
[Pipeline] echo
addressing DERSAPP005 for mailbox

Running down the log, I can see that the jobs are started in reverse order. But the two "server" jobs on 002 and 005 are not triggered after the respective role jobs have finished.

The goal is to get those two jobs triggered after the role job has finished.

Does anybody have an idea what is going wrong here?

Moin (hi, people),
while waiting for a hint from this community I did not stop digging, and I think I found the cause myself.
At least my current branch now runs successfully.
For those wondering about the why, what, and where, here are my findings:

def parallelStagesMap = STAGESLIST.collectEntries {
  ["${it[0]}" :  generateStage(it[0], it[1]) ]
}

This part creates (or at least relies on) a Groovy Map, i.e. what I would call a hash table, where the part in front of the ":" is the key and the part behind it is the value.
For the two backups that run on the same node ("dersapp002" and "dersapp005") the key is therefore identical, which of course means the map entry is overwritten by the last call in the loop.
Primitive example (PowerShell syntax):

>$myarray = @{}
>$myarray["dersapp002"] = "Hello world!"
>$myarray["dersapp002"] = "Goodbye world"
>Write-Host $myarray["dersapp002"]

Goodbye world

>

As I am not an expert in Java or Groovy, I just made this little change as a rough test:

def parallelStagesMap = STAGESLIST.collectEntries {
  ["${it[0]}${it[1]}" :  generateStage(it[0], it[1]) ]
}

This now results in the primitive example:

>$myarray = @{}
>$myarray["dersapp002Server"] = "Hello world!"
>$myarray["dersapp002Database"] = "Goodbye world"
>Write-Host $myarray["dersapp002Server"]

Hello world!

>Write-Host $myarray["dersapp002Database"]

Goodbye world

>

Of course this approach may not be very "elegant", but it works for me.
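The same collision can also be shown in plain Groovy, closer to the actual pipeline code. A minimal sketch, run outside Jenkins, mirroring the `collectEntries` call from the script above (parenthesized keys are used so the keys are plain Strings; everything else is illustration only):

```groovy
// Two backups scheduled for the same node, as in the real STAGESLIST.
def stagesList = [
    ['DERSAPP002', 'server'],
    ['DERSAPP002', 'database'],
]

// Key is the node name only: the second entry overwrites the first,
// so the server backup for DERSAPP002 silently disappears.
def byNode = stagesList.collectEntries { [(it[0]): it[1]] }
assert byNode.size() == 1
assert byNode['DERSAPP002'] == 'database'

// Key combines node name and backup type: both entries survive.
def byNodeAndType = stagesList.collectEntries { [(it[0] + it[1]): it[1]] }
assert byNodeAndType.size() == 2
assert byNodeAndType['DERSAPP002server'] == 'server'
```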

So you may consider this "solved" and archived for future alien scientists :wink:

thanks for your time
C.