Thank you @zspec!
I totally agree with you that adhering to best practices is important and eases the path for future maintenance and migration of Jenkins installations.
My use case might be a bit different from the typical DevOps/CI/CD installation of Jenkins, as we have dug deep into the automation server's capabilities for executing analytics (read: R and Python algorithms), HPC image analysis with CellProfiler and deep-learning algorithms, managing laboratory data (ETL, DB loading, etc.), monitoring the performance of laboratory robotics, and more such life/data-science tasks. You can read about some of this in our repository, Novartis/Jenkins-LSCI: Jenkins for Life Science Continuous Integration (github.com), as well as in the paper we published back in 2016: "Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform" (Moutsatsos et al., 2017, sagepub.com).
Nonetheless, the main migration issues I had to overcome were the 'security hardening' of Jenkins, which made the integration and interaction of so many external tools a challenge, coupled with the advances in security applied to my own company's internal systems (read: use of the CyberArk Conjur plugin for Jenkins).
The third issue I faced was the deprecation and poor maintenance of certain Jenkins plugins, which made them incompatible with recent LTS versions of Jenkins. In most cases I was able to find a replacement plugin (e.g., the Publish Over SSH plugin in place of the SSH plugin), but this remains an 'imminent danger' with every successive Jenkins release. Several 'orphaned' plugins still provide key functionality to our workflows; one example is the 'Associated Files' plugin, which we use to maintain links between Jenkins builds and large artifacts (such as image sets of several hundred GBs) on our HPC cluster storage.
Finally, my unique situation prevents us from deleting most job builds, as they represent archival artifacts of laboratory analyses that generate scientific hypotheses and follow-up actions. The migration included at least 2 TB of analytical data (read: build artifacts) that needed to be restored with their relationships intact in the upgraded Jenkins installation. I developed custom Groovy and PowerShell scripts that used the in-memory Jenkins object model to discover the linked data (the inputs of an analysis are usually the artifacts of an upstream job) and copy them to the new servers.
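To give a flavor of that discovery step, here is a minimal sketch in the style of a Jenkins script-console Groovy script. It is an illustration under assumptions, not my actual migration script: the `targetRoot` path is a placeholder, and the simple byte copy stands in for the real transfer logic (which our PowerShell scripts handled). It walks every build, follows the upstream-cause link, and copies the upstream build's artifacts into a folder keyed by the downstream job and build number so the relationship survives the move.

```groovy
// Sketch only: discover artifacts linked to each build via its upstream cause.
// Run from the Jenkins script console; targetRoot is a hypothetical staging path.
import jenkins.model.Jenkins
import hudson.model.Job
import hudson.model.Run
import hudson.model.Cause

def targetRoot = new File('/mnt/new-jenkins/artifact-staging') // placeholder

Jenkins.instance.getAllItems(Job.class).each { job ->
    job.builds.each { Run build ->
        // Inputs of an analysis are usually the artifacts of an upstream job,
        // so follow the UpstreamCause link recorded on the build.
        def upstream = build.getCause(Cause.UpstreamCause)
        if (upstream != null) {
            def upJob = Jenkins.instance.getItemByFullName(upstream.upstreamProject, Job.class)
            def upBuild = upJob?.getBuildByNumber(upstream.upstreamBuild)
            upBuild?.artifacts.each { artifact ->
                // getArtifactsDir() is deprecated in newer cores but still present
                // in the LTS versions involved in this migration.
                def src  = new File(upBuild.artifactsDir, artifact.relativePath)
                def dest = new File(targetRoot,
                        "${job.fullName}/${build.number}/${artifact.relativePath}")
                dest.parentFile.mkdirs()
                dest.bytes = src.bytes // naive copy; real scripts streamed to the new servers
            }
        }
    }
}
```

The key point is that the linkage (which downstream build consumed which upstream artifacts) lives only in the in-memory object model, so it has to be materialized into the directory layout before the builds are moved.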
Although I can’t post my presentation without legal approval from my company, I hope this summary gives you, and other fellow developers (or data scientists), an idea of the challenges involved and the workarounds to overcome them.
I must say that, having onboarded a Jenkins LTS that included most of the UI and security updates (since v2.222.1), I feel that future upgrades will be a lot easier. Finally, I can’t overstate the contribution of the Jenkins community during this migration. I got much valuable input by posting and discussing some of these challenges on this forum, and from my longtime collaborator and contributor on the Active Choices plugin, @kinow.