I am running Jenkins in a K8s cluster, installed via the Helm chart.
In my Helm values there is a section where I define the CasC; the config below works well (a sketch of where this block sits in my values.yaml follows it).
cloud-config: |
  jenkins:
    clouds:
    - kubernetes:
        containerCap: 10
        containerCapStr: "10"
        jenkinsTunnel: "jenkins-2480-agent.jenkins-2480.svc.cluster.local:50000"
        jenkinsUrl: "https://XXXXX.com/"
        name: "kubernetes"
        namespace: "jenkins-2480"
        podLabels:
        - key: "jenkins/jenkins-jenkins-agent"
          value: "true"
        serverUrl: "https://kubernetes.default"
        templates:
        - containers:
          - envVars:
            - envVar:
                key: "JENKINS_URL"
                value: "https://jenkinsdev-XXXX.com/"
            image: "XXXXXX"
            livenessProbe:
              failureThreshold: 0
              initialDelaySeconds: 0
              periodSeconds: 0
              successThreshold: 0
              timeoutSeconds: 0
            name: "jnlp"
            privileged: true
            resourceLimitCpu: "2"
            resourceLimitEphemeralStorage: "50Gi"
            resourceLimitMemory: "16Gi"
            resourceRequestCpu: "1"
            resourceRequestEphemeralStorage: "30Gi"
            resourceRequestMemory: "8Gi"
            ttyEnabled: true
            workingDir: "/home/jenkins"
          id: "XXXXX"
          imagePullSecrets:
          - name: "XXXXX"
          label: "kubeagent"
          name: "kubeagent"
          podRetention: "never"
          serviceAccount: "default"
          slaveConnectTimeout: 300
          slaveConnectTimeoutStr: "300"
          workspaceVolume:
            persistentVolumeClaimWorkspaceVolume:
              claimName: "jenkins-2480-slave"
              readOnly: false
          yamlMergeStrategy: "override"
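For context, the cloud-config block above sits in my values.yaml roughly like this (I am assuming the standard controller.JCasC.configScripts layout of the official jenkins/jenkins chart here; everything except the cloud-config key itself is just the chart's default structure):

controller:
  JCasC:
    configScripts:
      cloud-config: |
        jenkins:
          clouds:
          - kubernetes:
              # ... same content as in the block above ...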
Now I have added a kaniko container to my setup for Docker image building. I added it via the UI and it works well. I now want to configure it in CasC as well, so I go to the plugin and click View Configuration, which gives me a YAML file (the selected section is below).
When I add it and restart my pod, only the kaniko container is visible in the UI and the jnlp container is gone. (How I splice the exported section back into the Helm values is sketched after the export below.)
I have been breaking my head for a few hours over how to configure multiple containers in one config and just can't get it to work. I don't get why simply replacing the existing config with the one generated by Jenkins CasC doesn't work ;/
clouds:
- kubernetes:
    containerCap: 10
    containerCapStr: "10"
    jenkinsTunnel: "jenkins-2480-agent.jenkins-2480.svc.cluster.local:50000"
    jenkinsUrl: "https://jenkinsdev-XXXXcom/"
    name: "kubernetes"
    namespace: "jenkins-2480"
    podLabels:
    - key: "jenkins/jenkins-jenkins-agent"
      value: "true"
    serverUrl: "https://kubernetes.default"
    templates:
    - containers:
      - envVars:
        - envVar:
            key: "JENKINS_URL"
            value: "https://jenkinsdXXXXXcom/"
        image: "XXXXX"
        livenessProbe:
          failureThreshold: 0
          initialDelaySeconds: 0
          periodSeconds: 0
          successThreshold: 0
          timeoutSeconds: 0
        name: "jnlp"
        privileged: true
        resourceLimitCpu: "2"
        resourceLimitEphemeralStorage: "50Gi"
        resourceLimitMemory: "16Gi"
        resourceRequestCpu: "1"
        resourceRequestEphemeralStorage: "30Gi"
        resourceRequestMemory: "8Gi"
        ttyEnabled: true
        workingDir: "/home/jenkins"
      - args: "9999999"
        command: "sleep"
        envVars:
        - envVar:
            key: "AWS_SDK_LOAD_CONFIG"
            value: "true"
        - envVar:
            key: "AWS_EC2_METADATA_DISABLED"
            value: "true"
        image: "gcr.io/kaniko-project/executor:v1.23.2-debug"
        livenessProbe:
          failureThreshold: 0
          initialDelaySeconds: 0
          periodSeconds: 0
          successThreshold: 0
          timeoutSeconds: 0
        name: "kaniko"
        workingDir: "/home/jenkins"
      id: "XXXXX"
      imagePullSecrets:
      - name: "XXXXX"
      label: "kubeagent"
      name: "kubeagent"
      podRetention: "never"
      serviceAccount: "default"
      slaveConnectTimeout: 300
      slaveConnectTimeoutStr: "300"
      workspaceVolume:
        persistentVolumeClaimWorkspaceVolume:
          claimName: "jenkins-2480-slave"
          readOnly: false
      yamlMergeStrategy: "override"
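For completeness, this is roughly how the exported section ends up back in my Helm values before I restart the pod (same controller.JCasC.configScripts layout as above; the container bodies are abbreviated, they are exactly the ones from the export):

controller:
  JCasC:
    configScripts:
      cloud-config: |
        jenkins:
          clouds:
          - kubernetes:
              # ... cloud settings unchanged ...
              templates:
              - containers:
                - name: "jnlp"
                  # ... jnlp settings from the export ...
                - name: "kaniko"
                  # ... kaniko settings from the export ...
                # ... rest of the pod template unchanged ...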