I don’t know if there’s a recommended way. I know the CloudBees product has more HA support. You could use something like Syncthing or duplicity to keep two disks in sync (Jenkins doesn’t work great on NFS, and NFS would itself be a single point of failure), then when you switch over instances, tell the new instance to re-read its config from disk (a HUP probably, or an API call). I’m sure there are blog posts about it.
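A rough sketch of that cold-standby idea, using local directories as stand-ins for the two instances' disks (all paths here are made up for the demo; in practice Syncthing, duplicity, or rsync would do the copying continuously):

```shell
# Stand-in paths for the demo; a real setup would sync two machines' $JENKINS_HOME.
PRIMARY=/tmp/demo_jenkins_primary
STANDBY=/tmp/demo_jenkins_standby

# Demo setup: pretend the primary has one job defined.
mkdir -p "$PRIMARY/jobs/example-job" "$STANDBY"
echo '<project/>' > "$PRIMARY/jobs/example-job/config.xml"

# 1. Keep the standby's disk in sync (plain cp stands in for Syncthing/rsync here).
cp -R "$PRIMARY/." "$STANDBY"

# 2. On failover, tell the standby Jenkins to re-read its disk, e.g.
#    "Reload Configuration from Disk" over HTTP (needs auth and a CSRF crumb):
#      curl -X POST "$JENKINS_URL/reload" --user "$USER:$API_TOKEN"
#    or via the CLI jar:
#      java -jar jenkins-cli.jar -s "$JENKINS_URL" reload-configuration
ls "$STANDBY/jobs/example-job/config.xml"
```

The reload step matters: Jenkins caches job config in memory, so a synced disk alone isn't visible until the instance reloads.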
Pretty much push configs to the instances via:
https://plugins.jenkins.io/job-dsl/ - code definitions for jobs (or just use org scan folders + Jenkinsfiles)
https://github.com/jenkinsci/plugin-installation-manager-tool - Plugin Manager CLI tool for Jenkins; lets you update/manage plugins via the CLI
https://plugins.jenkins.io/configuration-as-code/ ← all other config as code
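For the configuration-as-code plugin, a minimal `jenkins.yaml` fragment looks something like this (keys follow the JCasC schema; the values are just illustrative):

```yaml
# Illustrative JCasC fragment -- picked up from $JENKINS_HOME/jenkins.yaml
# or wherever CASC_JENKINS_CONFIG points.
jenkins:
  systemMessage: "This instance is configured as code -- do not edit in the UI"
  numExecutors: 2
```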
Otherwise you could sync jobs/$jobname/config.xml between instances
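For the plugin-installation-manager-tool above, plugins are typically listed one per line in a `plugins.txt` (name, optionally `name:version`), which you then feed to the tool with something like `java -jar jenkins-plugin-manager.jar --war jenkins.war --plugin-file plugins.txt --plugin-download-directory <plugins dir>` (check the tool's README for the exact flags). The plugin names below are just examples:

```
configuration-as-code
job-dsl
git
```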
If I have a single folder with all the jobs, what should I synchronize it with (Syncthing) or back it up to (duplicity), and why?
We have multiple multibranch and Pipeline jobs based on Jenkinsfiles stored in Git (Build Configuration). However, the jobs themselves are defined in the UI, not in programmatic form (as the Job DSL plugin describes). Do you recommend starting to define the jobs themselves programmatically?
Clarification: the Job DSL plugin is already installed on our instance.
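For reference, a minimal Job DSL seed script for a Pipeline job driven by a Jenkinsfile might look like this (job name and repo URL are placeholders):

```groovy
// Job DSL seed script sketch: defines a Pipeline job whose definition
// lives in a Jenkinsfile in the repo, rather than in the UI.
pipelineJob('example-pipeline') {
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://example.com/your-repo.git')  // placeholder
                    }
                }
            }
            scriptPath('Jenkinsfile')
        }
    }
}
```

A seed job running this script recreates the job on any instance, which sidesteps much of the config.xml syncing discussed above.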
Can ‘sync’ synchronize data between the disks of different instances? (Doesn’t it only synchronize a single instance’s disk with that same instance’s memory?)