Build Docker images as part of Jenkins on Kubernetes

I have Jenkins deployed on Kubernetes, with one node hosting the Jenkins controller pod and another node for the worker tasks. It was running on EKS with Kubernetes version 1.21.

This is how my pipeline looked:

pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    illumex.ai/noderole: jenkins-worker
  containers:
  - name: docker
    image: docker:latest
    imagePullPolicy: Always
    command:
    - cat
    tty: true
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
"""
        }
    }

    stages {
        stage('build') {
            steps {
                container('docker') {
                    sh """
                    docker system prune -f
                    """
                }
            }
        }
    }
}
Then I upgraded Kubernetes from 1.21 to 1.24, and now my pipeline fails with the following error:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

So some questions:

  1. Does this mean the Docker socket can no longer be mounted when using Kubernetes?
  2. What is the right approach in this scenario?
  3. I’m trying to use the “docker:dind” (Docker-in-Docker) image, but then how can I keep my Docker builds cached?

Thank you

I’m pretty sure modern Kubernetes no longer uses Docker as its container runtime: 1.24 removed the dockershim, and the kubelet talks to a CRI runtime such as containerd instead, so there’s no /var/run/docker.sock on the host to mount.
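You can confirm this with kubectl; the runtime string will vary by cluster, but on a 1.24 EKS node it reports containerd rather than docker:

kubectl get nodes -o wide
# The CONTAINER-RUNTIME column shows e.g. containerd://1.6.x, not docker://...,
# so there is no Docker socket on the host for a hostPath mount.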

You’d have to either run your own Docker daemon inside Kubernetes (Docker-in-Docker), or use a daemonless/rootless build tool such as BuildKit, img, or kaniko.
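If you want to keep using the plain docker CLI, the usual pattern is a docker:dind sidecar in the agent pod. A minimal sketch, assuming your cluster allows privileged pods (your nodeSelector is carried over; TLS is disabled here for brevity):

apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    illumex.ai/noderole: jenkins-worker
  containers:
  - name: docker
    image: docker:latest
    command:
    - cat
    tty: true
    env:
    - name: DOCKER_HOST         # talk to the sidecar over the pod-local network
      value: tcp://localhost:2375
  - name: dind
    image: docker:dind
    securityContext:
      privileged: true          # the dind daemon requires a privileged container
    env:
    - name: DOCKER_TLS_CERTDIR  # empty value disables TLS; daemon listens on 2375
      value: ""

Note that the build cache then lives inside the dind container’s /var/lib/docker, so it dies with the pod; mount a persistent volume at that path if you need it to survive between builds.

A daemonless alternative that also answers your caching question is kaniko, which can push cached layers to a registry. A sketch, assuming a Dockerfile at the workspace root; the ECR account/region/repo values are placeholders for your own registry:

pipeline {
    agent {
        kubernetes {
            defaultContainer 'kaniko'
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug  # :debug tag includes a shell
    command:
    - sleep
    args:
    - infinity
"""
        }
    }
    stages {
        stage('build') {
            steps {
                sh '''
                /kaniko/executor \
                  --context="$WORKSPACE" \
                  --dockerfile=Dockerfile \
                  --destination=<account>.dkr.ecr.<region>.amazonaws.com/myapp:latest \
                  --cache=true \
                  --cache-repo=<account>.dkr.ecr.<region>.amazonaws.com/myapp-cache
                '''
            }
        }
    }
}

With --cache=true, kaniko checks the --cache-repo for each intermediate layer before building it and pushes new layers there, so the cache lives in your registry and survives pod restarts. Kaniko still needs push credentials, which on EKS typically come from an IAM role or a mounted Docker config.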
