Configuration as code - Jenkins Helm Chart

I am running Jenkins in a K8s cluster, installed via the Helm chart.
In my Helm chart there is a section where I define the CasC; the config below works well.

cloud-config: |
        jenkins:
          clouds:
          - kubernetes:
              containerCap: 10
              containerCapStr: "10"
              jenkinsTunnel: "jenkins-2480-agent.jenkins-2480.svc.cluster.local:50000"
              jenkinsUrl: "https://XXXXX.com/"
              name: "kubernetes"
              namespace: "jenkins-2480"
              podLabels:
              - key: "jenkins/jenkins-jenkins-agent"
                value: "true"
              serverUrl: "https://kubernetes.default"
              templates:
              - containers:
                - envVars:
                  - envVar:
                      key: "JENKINS_URL"
                      value: "https://jenkinsdev-XXXX.com/"
                  image: "XXXXXX"
                  livenessProbe:
                    failureThreshold: 0
                    initialDelaySeconds: 0
                    periodSeconds: 0
                    successThreshold: 0
                    timeoutSeconds: 0
                  name: "jnlp"
                  privileged: true
                  resourceLimitCpu: "2"
                  resourceLimitEphemeralStorage: "50Gi"
                  resourceLimitMemory: "16Gi"
                  resourceRequestCpu: "1"
                  resourceRequestEphemeralStorage: "30Gi"
                  resourceRequestMemory: "8Gi"
                  ttyEnabled: true
                  workingDir: "/home/jenkins"
                id: "XXXXX"
                imagePullSecrets:
                - name: "XXXXX"
                label: "kubeagent"
                name: "kubeagent"
                podRetention: "never"
                serviceAccount: "default"
                slaveConnectTimeout: 300
                slaveConnectTimeoutStr: "300"
                workspaceVolume:
                  persistentVolumeClaimWorkspaceVolume:
                    claimName: "jenkins-2480-slave"
                    readOnly: false
                yamlMergeStrategy: "override"
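
For context, in the official jenkins/jenkins chart a block like this sits under controller.JCasC.configScripts in values.yaml, nested as a literal block scalar. A minimal sketch of the surrounding structure (the cloud definition itself is elided):

controller:
  JCasC:
    configScripts:
      cloud-config: |
        jenkins:
          clouds:
          - kubernetes:
              name: "kubernetes"
              # ... the cloud definition shown above ...

Each key under configScripts is rendered to its own file (here cloud-config.yaml) in the controller's casc_configs directory.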

Now I have added a kaniko container to my setup for building Docker images. I added it via the UI and it works well. I now want to configure it in the CasC, so I go to the plugin and click View Configuration, which gives me a YAML file (selected section below).
When I add it and restart my pod, only the kaniko container is visible in the UI and the jnlp one is deleted.
I have been breaking my head for a few hours over how to configure multiple containers in one config and just can't get it to work. I don't get why simply replacing the existing config with the one generated by Jenkins CasC doesn't work ;/

  clouds:
  - kubernetes:
      containerCap: 10
      containerCapStr: "10"
      jenkinsTunnel: "jenkins-2480-agent.jenkins-2480.svc.cluster.local:50000"
      jenkinsUrl: "https://jenkinsdev-XXXXcom/"
      name: "kubernetes"
      namespace: "jenkins-2480"
      podLabels:
      - key: "jenkins/jenkins-jenkins-agent"
        value: "true"
      serverUrl: "https://kubernetes.default"
      templates:
      - containers:
        - envVars:
          - envVar:
              key: "JENKINS_URL"
              value: "https://jenkinsdXXXXXcom/"
          image: "XXXXX"
          livenessProbe:
            failureThreshold: 0
            initialDelaySeconds: 0
            periodSeconds: 0
            successThreshold: 0
            timeoutSeconds: 0
          name: "jnlp"
          privileged: true
          resourceLimitCpu: "2"
          resourceLimitEphemeralStorage: "50Gi"
          resourceLimitMemory: "16Gi"
          resourceRequestCpu: "1"
          resourceRequestEphemeralStorage: "30Gi"
          resourceRequestMemory: "8Gi"
          ttyEnabled: true
          workingDir: "/home/jenkins"
        - args: "9999999"
          command: "sleep"
          envVars:
          - envVar:
              key: "AWS_SDK_LOAD_CONFIG"
              value: "true"
          - envVar:
              key: "AWS_EC2_METADATA_DISABLED"
              value: "true"
          image: "gcr.io/kaniko-project/executor:v1.23.2-debug"
          livenessProbe:
            failureThreshold: 0
            initialDelaySeconds: 0
            periodSeconds: 0
            successThreshold: 0
            timeoutSeconds: 0
          name: "kaniko"
          workingDir: "/home/jenkins"
        id: "XXXXX"
        imagePullSecrets:
        - name: "XXXXX"
        label: "kubeagent"
        name: "kubeagent"
        podRetention: "never"
        serviceAccount: "default"
        slaveConnectTimeout: 300
        slaveConnectTimeoutStr: "300"
        workspaceVolume:
          persistentVolumeClaimWorkspaceVolume:
            claimName: "jenkins-2480-slave"
            readOnly: false
        yamlMergeStrategy: "override"

You have my deepest sympathies. The CasC integration with the Helm charts is a massive PITA, as you embed YAML inside YAML.

I would go and look at the controller’s home directory and check the ~/casc_configs folder. One of the issues with the Helm chart is that these CasC sections might end up with different file names inside the persistent volume, and AFAIK the chart will never remove obsolete files. So you might end up with a cloud-config.yaml holding one config and another new-cloud-config.yaml, and then, depending on the order in which they are read, the second one rolls back the settings of the first one.

The way multiple config files are read and applied, and what is merged versus simply overwritten, is not super clear either. I would make sure that you have a single file with a clouds: entry inside the casc_configs folder and delete any old ones you do not need; the sketch below shows the failure mode.
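
Assuming the file names from above, the bad state on the volume would look like this:

casc_configs/
├── cloud-config.yaml       # stale: clouds: with only the jnlp container
└── new-cloud-config.yaml   # current: clouds: with jnlp and kaniko

Whichever file is applied last wins for the clouds: entry, so a stale file can silently roll back the new one.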

BTW, you say you added the updated CasC; is that in the Helm chart, and did you push the updated chart before restarting the pod? If not, the old chart will rewrite the old YAML to the persistent volume when the pod is restarted.

Actually, I was able to solve the problem: I had to fix the YAML.
Do you know how to set up CasC using a remote repository?

I am aware that configUrls needs to be set, but I wonder what other elements need to be configured for it to work, i.e. whether I need to change env variables in values.yaml.
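
Something like this is what I have in mind, based on the chart's values (the URL is just a placeholder):

controller:
  JCasC:
    configUrls:
    - "https://raw.githubusercontent.com/example-org/jenkins-casc/main/clouds.yaml"

but I am not sure whether, for a private repo, credentials would also have to be passed in, e.g. via controller.containerEnv.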

Sorry, I do not have any experience with remotely hosted CasC configs.

We use ArgoCD to manage our controllers’ charts. I do not necessarily recommend it, because it is quite cumbersome to manage versus just running the charts from a Jenkins pipeline. It has some neat features, though.
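
For reference, a stripped-down ArgoCD Application pointing at the Jenkins chart looks roughly like this (names, namespaces, and the chart version are placeholders); note how the CasC block ends up nested one level deeper than in a plain values.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: jenkins
  namespace: argocd
spec:
  project: default
  source:
    repoURL: "https://charts.jenkins.io"
    chart: jenkins
    targetRevision: "5.1.0"   # placeholder chart version
    helm:
      values: |
        controller:
          JCasC:
            configScripts:
              cloud-config: |
                jenkins:
                  clouds:
                  # ... same cloud definition, two YAML layers deep ...
  destination:
    server: "https://kubernetes.default.svc"
    namespace: jenkins
  syncPolicy:
    automated:
      prune: true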

Gotcha, we also use ArgoCD as part of our CI/CD :)
Do you use CasC? If yes, how did you integrate it into Jenkins to be the least painful?

We do use CasC, but not for everything; secrets, for example. We avoid putting secrets directly on our controllers, but we do need a few for things to work; see the sketch below for the general idea.
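
A common pattern, and roughly what I mean (a sketch with hypothetical names, not our exact setup): keep the value in a Kubernetes Secret, expose it to the controller as an env variable, and let JCasC resolve the ${...} reference at load time:

controller:
  containerEnv:
  - name: SOME_API_TOKEN            # hypothetical env var
    valueFrom:
      secretKeyRef:
        name: jenkins-secrets       # hypothetical Kubernetes Secret
        key: api-token
  JCasC:
    configScripts:
      credentials-config: |
        credentials:
          system:
            domainCredentials:
            - credentials:
              - string:             # needs the plain-credentials plugin
                  id: "api-token"
                  secret: "${SOME_API_TOKEN}"   # resolved by JCasC at load time

That way the secret value never appears in the CasC file itself.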

I don’t have the exact details, as another team member put most of this in place. I’m not an expert with Helm charts myself.