Adding 'kubectl' container to pipeline not working

The top of my pipeline looks like this:

  pipeline {
    agent {
      kubernetes {
        yaml '''
          apiVersion: v1
          kind: Pod
          spec:
            containers:
            - name: dotnet
              image: mcr.microsoft.com/dotnet/sdk:5.0
              command:
              - cat
              tty: true
            - name: docker
              image: docker:dind
              tty: true
              securityContext:
                privileged: true
            - name: kubectl
              image: bitnami/kubectl
              command:
              - "sleep"
              - "240"
              tty: true
          '''
      }
    }

Everything works except the ‘kubectl’ container. I tried using ‘cat’ like in the other containers, but that didn’t work, so I switched to ‘sleep’, which keeps the container from exiting, but the commands never run inside the container:

    stage('Deploy new image') {
      steps {
        container('kubectl') {
          sh 'find'
          sh 'kubectl version'
          withCredentials([file(credentialsId: 'kubeconfig-vc-non-admin', variable: 'TMPKUBECONFIG')]) {
            sh "cp \$TMPKUBECONFIG /.kube/config"
          }
        }
      }
    }

What didn’t work about it?

The ‘cat’ command exited immediately, causing the ‘kubectl’ container to exit immediately, which caused the pipeline to exit, so I switched to the ‘sleep’ command … which does keep the container alive … but the stage just seems to hang for the length of time I’ve set the sleep to.

As I’ve performed various troubleshooting, it sometimes gets as far as the ‘sh’ command and just hangs there.

The Kubernetes plugin documentation mentions a hanging issue and says to try adding:

spec.securityContext:
  runAsUser: 1000

which resulted in seeing what looks like debug output from my other commands, but it never ran the kubectl command I was hoping to see, so I couldn’t verify that I was successfully running anything in the kubectl container.
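For reference, in pod YAML terms that suggestion sits at the top of the spec, above the containers list, so it applies to every container in the pod. A minimal sketch, with my kubectl container shown for context:

    spec:
      securityContext:
        runAsUser: 1000   # pod-level: applies to all containers
      containers:
      - name: kubectl
        image: bitnami/kubectl
        command:
        - cat
        tty: true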

Of my 3 containers, only the docker container needs the privileged securityContext; the others don’t. I also tried reordering them, moving the docker container to be last, but order didn’t seem to matter, and assigning them all privileged just caused more errors.


The full Jenkinsfile:

pipeline {
  agent {
    kubernetes {
      yaml '''
        apiVersion: v1
        kind: Pod

        spec:
#          securityContext:
#            runAsUser: 1000
          containers:
          - name: dotnet
            image: mcr.microsoft.com/dotnet/sdk:5.0
            command:
            - cat
            tty: true
#            securityContext:
#              privileged: true
          - name: kubectl
            image: bitnami/kubectl:latest
            command:
            - sleep
            args:
            - 99d
            tty: true
#            securityContext:
#              privileged: true
          - name: docker
            image: docker:dind
            tty: true
            securityContext:
              privileged: true

        '''
    }
  }
  environment {
    REGISTRY = "harbor.vc-prod.k.home.net"
    HARBOR_CREDENTIAL = credentials('robot-jenkins')
  }
  stages {
    stage('Git clone') {
      steps {
        git branch: 'main', credentialsId: 'github-asdf',
          url: 'https://github.com/asdf/p-lido-test.git'
      }
    }
    stage('Run dotnet') {
      steps {
        container('dotnet') {
          sh 'dotnet --info'
          sh 'dotnet publish --configuration Debug --runtime linux-x64 --self-contained true'
          sh 'find'
        }
      }
    }
    stage('Build container') {
      steps {
        container('docker') {
          sh 'docker version'
          withCredentials([file(credentialsId: 'ca-bundle-pem-format', variable: 'CABUNDLE')]) {
            sh "cp \$CABUNDLE /etc/ssl/certs/ca-bundle.crt"
          }
          sh '#docker logout harbor.vc-prod.k.home.net'
          sh '''echo $HARBOR_CREDENTIAL_PSW | docker login $REGISTRY -u $HARBOR_CREDENTIAL_USR --password-stdin'''
          sh "#cat \$HOME/.docker/config.json"
          sh "docker build -t 'harbor.vc-prod.k.home.net/lido/test:0.0.${BUILD_NUMBER}' ."
          sh "docker image push 'harbor.vc-prod.k.home.net/lido/test:0.0.${BUILD_NUMBER}'"
          sh 'docker build -t "harbor.vc-prod.k.home.net/lido/test:latest" .'
          sh 'docker image push "harbor.vc-prod.k.home.net/lido/test:latest"'
        }
      }
    }
    stage('Deploy new image') {
      steps {
        container('kubectl') {
          sh 'find'
          sh 'kubectl version'
          sh '#ls / -a'
          withCredentials([file(credentialsId: 'kubeconfig-vc-non-admin', variable: 'TMPKUBECONFIG')]) {
            sh "cp \$TMPKUBECONFIG /.kube/config"
          }
        }
      }
    }
  }
}

I posted the full Jenkinsfile, but it disappeared; it may have been temporarily flagged as spam …

Here’s a codeshare instead:
https://codeshare.io/LwoZ6K

I can go over to Kubernetes and create a pod using the same pod definition from that Jenkinsfile (adding a metadata.name to it), and I can then k exec -it test -c kubectl -- bash and run ‘kubectl version’ and other kubectl commands. Not sure why running the ‘kubectl version’ command in the pipeline isn’t working.
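Roughly what that standalone test looks like, assuming the pod spec is saved as pod.yaml with metadata.name: test added:

    # create the pod directly, outside of Jenkins
    kubectl apply -f pod.yaml
    # exec into the kubectl container of that pod
    kubectl exec -it test -c kubectl -- bash
    # inside the container this runs fine
    kubectl version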

So I’m thinking that means it either doesn’t have cat, or it has a non-standard version. Without being at a computer I can’t check.

I know the default command is cat, to allow the plugin to pass commands in and out of the container. I’ve never really looked at how or why it works.

You may want to think about just downloading the static binary (it’s pretty small), or making a new image with all the tools you need. That way you know you have bash and cat available.
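If you go the custom-image route, a rough sketch of a minimal Dockerfile (the base image and kubectl version are placeholders; pin whatever suits you):

    FROM debian:bookworm-slim
    # install a pinned static kubectl from the official release endpoint
    RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
     && curl -fsSLo /usr/local/bin/kubectl https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl \
     && chmod +x /usr/local/bin/kubectl \
     && rm -rf /var/lib/apt/lists/*
    # the Debian base provides bash and cat, so there is something for the plugin to exec
    CMD ["cat"]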

I wish I could understand better why it failed, as it leaves me feeling the Kubernetes plugin might be a bit unstable.

… in any case, your suggestion to just download kubectl has got me back on track. I’m using a busybox container along with wget, plus withCredentials to save the kubeconfig. I’ll see about using a proxy so it isn’t actually downloading the binary every time.
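Roughly what that stage looks like now; a sketch, where the ‘deploy’ container name and the kubectl version are placeholders and the busybox container is declared in the pod template like the others:

    stage('Deploy new image') {
      steps {
        container('deploy') {
          // fetch a static kubectl; a local proxy/cache could replace the public URL later
          sh 'wget -O /tmp/kubectl https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl && chmod +x /tmp/kubectl'
          withCredentials([file(credentialsId: 'kubeconfig-vc-non-admin', variable: 'TMPKUBECONFIG')]) {
            sh '/tmp/kubectl --kubeconfig "$TMPKUBECONFIG" version'
          }
        }
      }
    }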

thanks

I wouldn’t call it unstable, I would call it picky. I think it needs cat, or something like it, to pass commands into the container. Not sure about the specifics though. It’s how the Docker agent stuff works too, not really k8s-specific.

Can you please publish your files and pipeline with the correct configuration?

I was having the same issue. The fix for me was adding runAsUser: 1000.

pipeline {
  agent {
    kubernetes {
      yaml '''
        apiVersion: v1
        kind: Pod
        metadata:
          name: kubectl
        spec:
          containers:
          - name: kubectl
            image: docker.io/bitnami/kubectl
            command:
            - cat
            tty: true
            securityContext:
              runAsUser: 1000
      '''
    }
  }

  environment {
    KUBE_CONTEXT = credentials('Some_context')
  }

  stages {
    stage('DoKubectl') {
      steps {
        container('kubectl') {
          sh 'cp $KUBE_CONTEXT KUBECONFIG; chmod 444 KUBECONFIG'
          sh "export KUBECONFIG=./KUBECONFIG; kubectl config view"
          sh 'export KUBECONFIG=./KUBECONFIG; kubectl get pods -A'
        }
      }
    }
  }
}
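One possible refinement, as an untested sketch: set KUBECONFIG once in a stage-level environment block so every sh step picks it up without re-exporting, assuming the same KUBE_CONTEXT credential as above:

    stage('DoKubectl') {
      environment {
        // every sh step in this stage inherits KUBECONFIG, so no repeated export
        KUBECONFIG = "${env.WORKSPACE}/KUBECONFIG"
      }
      steps {
        container('kubectl') {
          sh 'cp "$KUBE_CONTEXT" "$KUBECONFIG" && chmod 444 "$KUBECONFIG"'
          sh 'kubectl config view'
          sh 'kubectl get pods -A'
        }
      }
    }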