I have a Jenkins Pipeline job that gets triggered from GitLab via webhooks. Whenever I run a build that involves a change to the Jenkinsfile, it seems to reset the GitLab connection and the Build Triggers section. That means, apart from those settings, it also loses the secret token, and I have to regenerate it and re-establish the webhook connection every time that happens.
What could be the reason for this and how do I resolve this?
My edited Jenkinsfile looks like this:
#!/usr/bin/env groovy
// Abort previous build.
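// When a newer build passes a milestone, older builds that have not passed it are aborted.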
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
This sounds vaguely like my problem, only mine is forgetting the parameterized cron expressions. I haven't been able to find any pattern to mine, though; it's totally random and doesn't have anything to do with changes in the repo. I haven't gotten any responses to my question either. Good luck.
Maybe you are mixing two forms of job configuration? The Pipeline definition in the Jenkinsfile should define the complete job, including build triggers and other configuration. If the build triggers and the GitLab connection configuration are not defined in the Jenkinsfile, then changes to the Jenkinsfile may remove those job configuration settings that you applied from the Jenkins job configuration page.
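For example, a minimal sketch of a Jenkinsfile that defines the trigger and the GitLab connection as code (using the GitLab plugin's declarative syntax; the connection name 'my-gitlab' is just a placeholder) would look roughly like this:
pipeline {
    agent any
    options {
        // 'my-gitlab' is a placeholder for the connection name configured under Manage Jenkins
        gitLabConnection('my-gitlab')
    }
    triggers {
        gitlab(triggerOnPush: true, triggerOnMergeRequest: true, branchFilterType: 'All')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Hello World'
            }
        }
    }
}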
But my Jenkinsfile does not have any triggers block or GitLab connection in the options block. You can see the edited Jenkinsfile in my first post. Do you think my Jenkinsfile still has some other GitLab-specific job configuration that could potentially be causing this issue?
I have been following the official guide from GitLab for my webhook configuration.
That was exactly my concern. A Pipeline job defines the job as code using the Jenkinsfile. If you are also using the job configuration page to adjust the job definition from a web browser and are not placing those adjustments in the Jenkinsfile, then those adjustments may be lost the next time the Pipeline runs.
This seems like a useful pipeline pattern, and I’ve actually been using it successfully for many years…so much so that I assumed that this was normal and common.
To call out specifically what I'm talking about: I'm referring to defining the pipeline in a Jenkinsfile but omitting the build trigger information. That trigger information is then defined via the web UI.
This allows us to use a common Jenkinsfile for many pipelines that differ only in how they are triggered.
This actually works just fine, EXCEPT in the case where a Job DSL seed job is used to create a pipeline (and the Job DSL seed job also adds the build trigger). What happens is that the created pipeline runs fine the first time, then the build trigger gets cleared, so it never runs automatically again.
I wonder if there is a feasible workaround for this? For example, some way in the Jenkinsfile to say “please do not touch the Build Triggers when syncing the pipeline with the Jenkinsfile after running”.
It’s odd that this is needed, though, since if the pipeline was just manually created (i.e. NOT with Job DSL), all of this works as I’ve been expecting.
The only “workaround” is to switch from a Declarative pipeline to a Scripted one. Since you already use Job DSL to define your triggers, parameters and whatnot, probably the only major feature you'd lose in such a transition would be restart from stage. Most of the remaining differences would boil down to just syntax changes.
What is the workaround exactly, other than to switch to a scripted pipeline?
I forgot to mention that I actually am already using scripted pipeline Jenkinsfiles. These Jenkinsfiles deliberately don't have any build triggers defined, so I would expect build triggers to be ignored when Jenkins syncs the configuration with the Jenkinsfile after running… and this is indeed the behavior I see if I create my pipeline manually via the Jenkins UI.
It only behaves differently if I use a Job DSL seed job to create the pipeline.
Welp, this definitely is a surprise to me! My “solution” literally meant switching from the declarative pipeline { agent any; stages { … } } syntax to the scripted node() { … } form.
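In other words, a trivial sketch of the two shapes I meant:
// Declarative:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Hello'
            }
        }
    }
}

// Scripted equivalent:
node {
    stage('Build') {
        echo 'Hello'
    }
}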
As Mark mentioned above, the things that a (declarative) pipeline does under the hood may override whatever was set in the job definition – so I assumed that in your case a pipeline call with a missing triggers block was somehow nullifying the existing setting.
I shamefully missed your original mention of “if the pipeline was just manually created, not with Job DSL, all of this works”. But now that you mention that you already use the scripted approach, I am at a loss. I have been using Job DSL + scripted pipelines for several years at two sites and have never encountered such an issue.
Does it happen with any job and any pipeline contents, or only with some specific one? If the former, can you provide a minimal example of it, together with your plugins list? I would be most curious to poke at it, even if purely from a user standpoint.
This is the Jenkinsfile that drives the created pipeline, checked into a git repo under the branch “dsl_minimal”:
node('generic-x86') {
    properties([
        parameters([
            booleanParam(name: 'SOME_PARAM',
                         defaultValue: false,
                         description: 'To repro issue, it is important that there is a parameter defined.')
        ])
    ])
    echo 'echo Hello World'
}
To reproduce the issue:
1. Run the seed job to create “Pipeline”.
2. See “Pipeline” running as scheduled (every 3 minutes).
3. Run the seed job again to update “Pipeline” (note that we are not expecting any actual updates).
4. Watch “Pipeline” run once more and then never again.
Note that it seems important that the Jenkinsfile-based pipeline has a parameter defined.
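The seed script itself isn't shown above, but a minimal Job DSL sketch along these lines sets up the same situation (the job name, repository URL, and cron spec are placeholders rather than the original values):
pipelineJob('Pipeline') {
    // The trigger is added by the seed job, not by the Jenkinsfile.
    properties {
        pipelineTriggers {
            triggers {
                cron {
                    spec('H/3 * * * *')
                }
            }
        }
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://example.com/group/repo.git') // placeholder repository
                    }
                    branch('dsl_minimal')
                }
            }
            scriptPath('Jenkinsfile')
        }
    }
}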
I think this is related to src/main/java/org/jenkinsci/plugins/workflow/multibranch/JobPropertyStep.java in jenkinsci/workflow-multibranch-plugin at commit baf91e49b3f770d92db683d0eaaa81c39c2c5ac0 on GitHub.
This adds a JobPropertyTrackerAction. When that action is not present, the step removes all other job properties before adding the new ones; otherwise it only removes the properties that it set before.
So when you regenerate the job with Job DSL, that action is lost and the next run removes the trigger. But when you manually add the trigger after the job has run once and added the parameter, that action is not removed.
So I would assume that when you create a new job from scratch, add a trigger, and then run the pipeline that adds the parameter, the trigger is also lost.
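If that is indeed the cause, one possible workaround (an untested sketch, and it does mean moving the trigger out of the seed job) would be to declare the trigger in the Jenkinsfile's properties() call next to the parameter, so it is tracked as Jenkinsfile-managed configuration rather than treated as leftover settings to remove:
node('generic-x86') {
    properties([
        parameters([
            booleanParam(name: 'SOME_PARAM',
                         defaultValue: false,
                         description: 'Example parameter.')
        ]),
        // Declaring the trigger here keeps it under the Jenkinsfile's control,
        // so the properties step should not strip it on the next run.
        pipelineTriggers([cron('H/3 * * * *')])
    ])
    echo 'Hello World'
}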