Trigger a Groovy call in a Jenkins library on every pipeline start

We use a globally defined Jenkins library to set up additional data for our pipelines, specifically to determine which container tags to use on our k8s cloud.

To do this, all our pipelines have a 'prepare' stage whose sole job is to clone the git repo and read the yaml file, so that the following stages have the correct information about which containers to use. This works pretty well, but it feels like a waste of time and resources when the data is already available in the WORKSPACE of the pipeline on the controller, and we could skip the extra pod and git clone.
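For context, that stage boils down to something like this (identifiers such as `basicPodYaml()` are illustrative, not our real code):

```groovy
// Sketch of the current 'prepare' stage (names are illustrative).
stage('Prepare') {
    agent {
        kubernetes {
            yaml basicPodYaml()   // an extra pod, spun up just for this stage
        }
    }
    steps {
        script {
            checkout scm                                   // full git clone
            def conf = readYaml file: 'pipeline_conf.yaml' // Pipeline Utility Steps plugin
            env.CI_CONTAINER_TAG = conf.ci_container_tag   // consumed by later stages
        }
    }
}
```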

AI tools told us that our Jenkins library could have a resources/init.groovy file that would get called automatically, but there is zero documentation for this and I think it is just a hallucination.

Another AI tool mentioned that we could create an init.groovy script on the controller that is called after the controller initializes (that's documented), and that script could register a callback on pipeline start through a RunListener.all().add(new PipelineStartListener()) call.
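For reference, that suggestion amounts to a Groovy init script along these lines (untested sketch of the AI's proposal, dropped into `$JENKINS_HOME/init.groovy.d/`):

```groovy
// $JENKINS_HOME/init.groovy.d/pipeline-listener.groovy -- untested sketch
import hudson.model.Run
import hudson.model.TaskListener
import hudson.model.listeners.RunListener
import org.jenkinsci.plugins.workflow.job.WorkflowRun

class PipelineStartListener extends RunListener<Run> {
    @Override
    void onStarted(Run run, TaskListener listener) {
        // Only react to Pipeline (workflow) runs, not freestyle jobs.
        if (run instanceof WorkflowRun) {
            listener.logger.println("Pipeline started: ${run.fullDisplayName}")
            // ...prepare/inject data for the run here...
        }
    }
}

RunListener.all().add(new PipelineStartListener())
```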

This might work, but feels a bit hacky.

Is there direct support to execute code on pipeline start without having a Jenkinsfile calling it explicitly?

AI tools told us that our jenkins library could have a resources/init.groovy file […but] I think this is just hallucination.

It is a hallucination.

an init.groovy script […could call] RunListener.all().add(new PipelineStartListener())

Do not do that.

Is there direct support to execute code on pipeline start without having a Jenkinsfile calling it explicitly?

No.

What is your recommendation to help augment the pipelines' behavior?

We are specifically looking for a way to ensure that our k8s-based pipelines use the correct container for a given branch (mostly so we can test new containers against the code on our main branch without impacting our release branches when the container on main is updated): each branch decides this with a custom pipeline_conf.yaml that we load.

In our current approach we have a bit of a chicken-and-egg condition: a pipeline stage needs to run without our CI container, perform a git clone of our monorepo, and read the yaml; only then is the data available to configure the pod yaml definition of the remaining stages.

We are looking for a way to skip this stage and simplify our pipeline template at the same time.

Do you have access to the repo via HTTP? Then you could use the HTTP Request plugin in combination with the readYaml step.

That would at least avoid having to clone the repo just to read a single file. You would still need your prepare stage in each pipeline, but this approach doesn't require a workspace.
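Something along these lines (the URL and yaml field names are placeholders; `httpRequest` comes from the HTTP Request plugin, `readYaml` from Pipeline Utility Steps):

```groovy
// Fetch the raw config file over HTTP instead of cloning the repo.
def resp = httpRequest(
    url: "https://git.example.com/raw/monorepo/${env.BRANCH_NAME}/pipeline_conf.yaml",
    validResponseCodes: '200'
)
def conf = readYaml text: resp.content
echo "Using CI container tag: ${conf.ci_container_tag}"
```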

We already fall back to HTTP requests for other micro-repos where we do not want to maintain the config file. The current logic looks for the file at the root of the repo; failing that, it makes an HTTP request against the current repo (in case the file is just missing on the current branch), and then against our main monorepo, which always has the config file with the latest tag.
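That fallback chain could be sketched as a library step roughly like this (the `rawUrl()` helper and `MONOREPO_URL` constant are hypothetical; the first check assumes a workspace is available):

```groovy
// vars/loadPipelineConf.groovy -- illustrative sketch of the lookup order
def call() {
    // 1. File present at the root of the checked-out repo?
    if (fileExists('pipeline_conf.yaml')) {
        return readYaml(file: 'pipeline_conf.yaml')
    }
    // 2. Try the current repo over HTTP (the file may just be missing on
    //    this branch), then 3. fall back to the monorepo's main branch.
    for (String url in [rawUrl(env.GIT_URL, env.BRANCH_NAME),
                        rawUrl(MONOREPO_URL, 'main')]) {
        def resp = httpRequest url: url, validResponseCodes: '100:599'
        if (resp.status == 200) {
            return readYaml(text: resp.content)
        }
    }
    error 'pipeline_conf.yaml not found anywhere'
}
```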

My primary goal here is to reduce the amount of boilerplate in our Jenkinsfiles while saving the time spent waiting for the k8s-based agents to launch, as well as reducing the number of pulls from the container registry.

I’m going to try something like this:

initializePipeline()

pipeline {
    agent none
    options {
        // ...
    }
    stages {
        stage('CI') {
            agent {
                kubernetes {
                    yamlMergeStrategy merge()
                    yaml getCiPod(mem: '500Mi')
                }
            }
            steps {
                container('ci-container') {
                    sh 'echo CI Container'
                }
            }
        }
    }
    post { always { chuckNorris() } }
}
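where `getCiPod()` would be a library step that renders the pod yaml from previously cached config, something like (a sketch; the package, registry, and field names are assumptions):

```groovy
// vars/getCiPod.groovy -- illustrative: build the pod yaml from cached config
import com.example.Global   // hypothetical package holding the per-run cache

def call(Map args = [:]) {
    // Fall back to defaults when the cache was never populated.
    String tag = Global.cache.ci_container_tag ?: 'latest'
    String mem = args.mem ?: '250Mi'
    return """\
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: ci-container
    image: registry.example.com/ci:${tag}
    resources:
      requests:
        memory: ${mem}
"""
}
```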

While trying to solve another issue, I recently found a way to 'cache' data in Jenkins libraries by declaring it under the src folder:

class Global {
    static Map cache = ['global': 'cache']
}

which can then be accessed by a script under the vars folder of the Jenkins library during the same pipeline run:

import com.[..snip..].Global
// ....
    def global_cache = Global.cache

The idea is to see if I can get initializePipeline() to access the file in the controller's workspace. Since the controller will have cloned the repo at the current branch, if only to read the Jenkinsfile, the config file will be at the root of that workspace. If it can be read (big if, but the Groovy code always runs on the controller AFAIK, and Jenkins libraries might be allowed to access disk), then we can store the data in the pipeline run cache, so that the getCiPod() function has the correct tag to use.
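If the read works, `initializePipeline()` could look roughly like this (entirely speculative: whether the library may touch the controller's workspace directory, and where that directory actually lives, are exactly the open questions; the package name is hypothetical):

```groovy
// vars/initializePipeline.groovy -- speculative sketch
import com.example.Global   // hypothetical package for the per-run cache

def call() {
    // The controller may already have a checkout of the branch from reading
    // the Jenkinsfile; try to read the config file straight from it.
    def f = new File("${env.WORKSPACE ?: ''}/pipeline_conf.yaml")
    if (f.exists()) {
        Global.cache.putAll(readYaml(text: f.text))
        echo 'pipeline_conf.yaml loaded from the controller workspace'
    } else {
        echo 'pipeline_conf.yaml not found on the controller; falling back'
        // ...fall back to the HTTP lookup described earlier...
    }
}
```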

Of course we do a bunch of other things beyond getting the container tags, but that is beyond the scope of this thread.

Unfortunately I won't have much time for these experiments due to other priorities, but I'll report back with what I figure out.