I have been using Jenkins Pipeline for years now and was wondering if there is anything better than my current workflow. When I start a new pipeline:
1. Create the Job DSL Groovy with the parameters to the pipeline (declarative does not parse the parameters correctly for new pipelines).
2. Commit, push, run seed jobs.
3. Edit the pipeline library to create new steps or helper functions.
4. Edit the Jenkinsfile in Vim/Neovim to use the new functions or add debug prints.
5. Commit and push the pipeline lib, pipeline, and any other dependent git repos.
6. SSH to the controller, run the job, read the output.
7. Fix syntax errors and typos.
8. Repeat the last four steps over and over.
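For context, a minimal sketch of the kind of seed script meant in step 1: the parameters live in the Job DSL instead of the Jenkinsfile's `parameters {}` block, so they exist before the first run. Job name, parameter names, and the repo URL are all made-up examples.

```groovy
// Hypothetical seed-job (Job DSL) snippet: declare parameters here rather
// than in declarative, since declarative only registers them after the
// pipeline has run once.
pipelineJob('deploy-widgets') {                   // example job name
    parameters {
        stringParam('LOG_LEVEL', 'INFO', 'Pipeline log verbosity')
        booleanParam('DRY_RUN', true, 'Plan only, do not apply')
    }
    definition {
        cpsScm {
            scm {
                git {
                    remote { url('https://example.com/widgets.git') }  // example URL
                    branch('main')
                }
            }
            scriptPath('Jenkinsfile')
        }
    }
}
```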
Basically, is there a way to edit the pipeline and the pipeline library and test them without having to commit to source control and push? Things would go so much smoother if I didn’t have to wait around for git network operations. In a busy shop like my work, the network can get slow sometimes.
For small edits, there’s a “Replay” link in the job’s sidebar that lets you edit the pipeline script and re-run it, so you can make quick edits that are not committed.
Last piece of advice when using jenkinsfile-runner (and this is explicitly warned against in Pipeline Best Practices): override built-in steps in another library which is loaded alongside the library being tested. I do this sometimes to mock `sh` or plugin steps. It’s brittle and hard to maintain, but I’ve found it helpful on occasion, and it was good to get that off my chest.
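To make the shadowing trick concrete, here is a rough sketch of what such a mock can look like: a `vars/sh.groovy` in a separate “mocks” library loaded alongside the library under test, shadowing the built-in `sh` step. The exact behavior (echo-only, empty stdout) is just one possible stub, not a recommended implementation.

```groovy
// vars/sh.groovy in a hypothetical "mocks" shared library.
// When this library is loaded, its sh shadows the built-in sh step,
// so library code under test never actually shells out.
def call(Map args) {
    echo "MOCK sh: ${args.script}"
    // Mimic the real step's contract just enough for callers:
    return args.returnStdout ? '' : 0
}

def call(String script) {
    echo "MOCK sh: ${script}"
}
```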
Lots of people have already dropped some valuable knowledge in this thread, but let me add some of my highly opinionated and often controversial takes.
I try not to use parameters at all, and lots of them is, to me, a code smell. This depends a lot on the type of jobs you have, but for IaC or CasC, the only parameter I typically have is log level. Most other CI tools (GitHub Actions, GitLab CI, Travis, etc.) don’t have parameters.
For local development I often just hit Replay over and over again. Sometimes using my IDE and sometimes just using the web interface on a local Docker Jenkins.
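If you haven’t run a local Docker Jenkins before, the official image gets you a disposable instance to replay against. The volume name is arbitrary; this is just the standard invocation, not a hardened setup.

```shell
# Throwaway local Jenkins for pipeline experiments.
# jenkins_home volume persists jobs/plugins between restarts.
docker run --rm -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```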
Most of the stuff I do involves infra, so I can’t even test locally; I just use our production Jenkins and play the replay game.
For our jenkins-job repo we have automation that merges the feature branch with the main branch and then seeds it. It works okay for one person, but the second someone else comes along, it wipes their job out. I have been meaning to make it merge all feature branches and seed that, but it’s low priority.
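The “merge all feature branches and seed that” idea could be as simple as an octopus merge onto a throwaway branch before seeding. Branch names here are examples, and this assumes the feature branches don’t conflict with each other:

```shell
# Build a disposable integration branch from main, then octopus-merge
# every in-flight feature branch into it; seed Jenkins from seed-preview.
git checkout -B seed-preview main
git merge --no-edit feature/alice feature/bob
```

If any pair of branches conflicts, the octopus merge aborts, which is arguably what you want before seeding shared jobs.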
Mad respect to the wizards that still use vi and Vim. If you want a little more help, like IntelliSense and auto-completion, you can try out VS Code and the Jenkins Extension Pack on the Visual Studio Marketplace.
This is where the IDE comes in. I don’t have this problem anymore.
This is pretty slick. What about on a normal Jenkins? My main development Jenkins is a Docker container; I only use the runner for CI/CD.
If I configure my Jenkins container for file SCM, do I need to restart the container when I modify my library code, or will the changes take effect the next time I run a job?