How to run kubectl in a pipeline

Hi all,
This is my pipeline:

pipeline {
    agent {
        kubernetes {
            yamlFile 'Jenkins-agent-pod.yaml'
        }
    }
 
    environment {
        DOCKERFILE_PATH = 'Dockerfile'
        DOCKER_IMAGE_NAME = 'myrepo/myapp:tag'
    }
 
    stages {
        
        stage('Build and Push Docker Image with Kaniko') {
            steps {
                container(name: 'kaniko', shell: '/busybox/sh') {
                    withCredentials([file(credentialsId: 'dockerhub', variable: 'DOCKER_CONFIG_JSON')]) {
                        withEnv(['PATH+EXTRA=/busybox']) {
                            sh '''#!/busybox/sh
                                cp $DOCKER_CONFIG_JSON /kaniko/.docker/config.json
                                /kaniko/executor --context `pwd` --dockerfile $DOCKERFILE_PATH --destination $DOCKER_IMAGE_NAME
                            '''
                        }
                    }
                }
            }
        }

        stage('Deploy App to Kubernetes') {
            steps {
                container('kubectl') {
                    withCredentials([file(credentialsId: 'k3s-hq-admin', variable: 'KUBECONFIG')]) {
                        sh 'kubectl get nodes'
                    }
                }
            }
        }
    }
}
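
For reference, the Dockerfile at DOCKERFILE_PATH is not shown here; judging from the Kaniko log further down (ubuntu:20.04 base, WORKDIR /app, CMD ["bash"], no other commands), it amounts to something like this illustrative sketch, not the actual file:

# illustrative reconstruction from the Kaniko build log, not the real Dockerfile
FROM ubuntu:20.04
WORKDIR /app
CMD ["bash"]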

This is my Jenkins-agent-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  labels:
    type: jenkins-agent
spec:
  #securityContext:
  #  runAsUser: 1000
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.17.0-debug
    command:
      - sleep
    args:
      - 99d
  - name: kubectl
    image: bitnami/kubectl:latest
    command:
      - "/bin/sh"
      - "-c"
      - "sleep 99d"
  restartPolicy: Never

The pipeline ran the stage 'Build and Push Docker Image with Kaniko' successfully but failed at the stage 'Deploy App to Kubernetes'.
Here is the output:

Started by user admin
Obtained Jenkinsfile from git https://gitlab.mydomain.com/myrepo/myproject
[Pipeline] Start of Pipeline
[Pipeline] readTrusted
Obtained Jenkins-agent-pod.yaml from git https://gitlab.mydomain.com/myrepo/myproject
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Created Pod: k3s-hq default/test-kaniko-45-885d6-rwss0-p7r25
Still waiting to schedule task
‘test-kaniko-45-885d6-rwss0-p7r25’ is offline
Agent test-kaniko-45-885d6-rwss0-p7r25 is provisioned from template test-kaniko_45-885d6-rwss0
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://192.168.7.112:8080/job/test-kaniko/45/"
    runUrl: "job/test-kaniko/45/"
  labels:
    type: "jenkins-agent"
    jenkins: "slave"
    jenkins/label-digest: "a078c72d475e351dcb00171721647cb4eba063a1"
    jenkins/label: "test-kaniko_45-885d6"
  name: "test-kaniko-45-885d6-rwss0-p7r25"
spec:
  containers:
  - args:
    - "99d"
    command:
    - "sleep"
    image: "gcr.io/kaniko-project/executor:v1.17.0-debug"
    name: "kaniko"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - command:
    - "/bin/sh"
    - "-c"
    - "sleep 99d"
    image: "bitnami/kubectl:latest"
    name: "kubectl"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_AGENT_NAME"
      value: "test-kaniko-45-885d6-rwss0-p7r25"
    - name: "JENKINS_WEB_SOCKET"
      value: "true"
    - name: "JENKINS_NAME"
      value: "test-kaniko-45-885d6-rwss0-p7r25"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://192.168.7.112:8080/"
    image: "jenkins/inbound-agent:3192.v713e3b_039fb_e-1"
    name: "jnlp"
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"

Running on test-kaniko-45-885d6-rwss0-p7r25 in /home/jenkins/agent/workspace/test-kaniko
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
The recommended git tool is: NONE
using credential jack.chuong
Cloning the remote Git repository
Cloning repository https://gitlab.mydomain.com/myrepo/myproject
 > git init /home/jenkins/agent/workspace/test-kaniko # timeout=10
Fetching upstream changes from https://gitlab.mydomain.com/myrepo/myproject
 > git --version # timeout=10
 > git --version # 'git version 2.39.2'
using GIT_ASKPASS to set credentials 
 > git fetch --tags --force --progress -- https://gitlab.mydomain.com/myrepo/myproject +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://gitlab.mydomain.com/myrepo/myproject # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
Checking out Revision 88da5419563613ad28acb58c6c946e818aec264d (refs/remotes/origin/main)
Commit message: "Update Jenkins-agent-pod.yaml"
 > git rev-parse refs/remotes/origin/main^{commit} # timeout=10
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 88da5419563613ad28acb58c6c946e818aec264d # timeout=10
 > git rev-list --no-walk 96979b7db007164115c852ea9738ed2a816faf6a # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build and Push Docker Image with Kaniko)
[Pipeline] container
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $DOCKER_CONFIG_JSON
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
INFO[0003] Retrieving image manifest ubuntu:20.04
INFO[0003] Retrieving image ubuntu:20.04 from registry index.docker.io
INFO[0006] Built cross stage deps: map[]
INFO[0006] Retrieving image manifest ubuntu:20.04
INFO[0006] Returning cached image manifest
INFO[0006] Executing 0 build triggers
INFO[0006] Building stage 'ubuntu:20.04' [idx: '0', base-idx: '-1']
INFO[0006] Skipping unpacking as no commands require it.
INFO[0006] WORKDIR /app
INFO[0006] Cmd: workdir
INFO[0006] Changed working directory to /app
INFO[0006] Creating directory /app with uid -1 and gid -1
INFO[0006] Taking snapshot of files...
INFO[0006] CMD ["bash"]
INFO[0006] Pushing image to myrepo/myapp:tag
INFO[0010] Pushed index.docker.io/myrepo/buildtool@sha256:11b3ec84edadf4b21168eff7cf745d86f60708e5e78df9ccb0aeedbb4c847fca
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy App to Kubernetes)
[Pipeline] container
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $KUBECONFIG
[Pipeline] {
[Pipeline] sh
process apparently never started in /home/jenkins/agent/workspace/test-kaniko@tmp/durable-61080fa4
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code -2
Finished: FAILURE

The credentialsId 'k3s-hq-admin' is the same Jenkins credential that is used to connect Jenkins to the k8s cluster.
How can I troubleshoot this? Please give me some advice, thank you very much.
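
One way to get more detail on the "process apparently never started" failure is the diagnostics switch the log itself suggests. As a sketch (assuming you can restart the controller or reach its script console), the property can be set like this:

# temporarily enable durable-task launch diagnostics when starting the controller
java -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true -jar jenkins.war

// or, temporarily, from Manage Jenkins > Script Console
System.setProperty('org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS', 'true')

Subsequent sh steps should then print more detail about the failed launch instead of just "process apparently never started".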

It worked. Here is my Jenkins-agent-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  labels:
    type: jenkins-agent
spec:
  #securityContext:
  #  runAsUser: 1000
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.17.0-debug
    command:
      - sleep
    args:
      - 99d
  - name: kubectl
    image: bitnami/kubectl:latest
    command:
      - "/bin/sh"
      - "-c"
      - "sleep 99d"
    tty: true
    securityContext:
      runAsUser: 0
  restartPolicy: Never
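
Relative to the first Jenkins-agent-pod.yaml, the changes are tty: true and securityContext.runAsUser: 0 on the kubectl container. If every container in the pod should run as root, the same effect could presumably be had once at pod level by reviving the commented-out block, e.g. this sketch:

spec:
  securityContext:
    runAsUser: 0   # pod-level; applies to all containers unless a container overrides it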

My pipeline:

pipeline {
  agent {
    kubernetes {
      yamlFile 'Jenkins-agent-pod.yaml'
    }
  }

  environment {
    DOCKERFILE_PATH = 'Dockerfile'
    DOCKER_IMAGE_NAME = 'myrepo/myapp:tag'
  }

  stages {

    stage('Build and Push Docker Image with Kaniko') {
      steps {
        container(name: 'kaniko', shell: '/busybox/sh') {
          withCredentials([file(credentialsId: 'dockerhub', variable: 'DOCKER_CONFIG_JSON')]) {
            withEnv(['PATH+EXTRA=/busybox']) {
              sh '''#!/busybox/sh
                cp $DOCKER_CONFIG_JSON /kaniko/.docker/config.json
                /kaniko/executor --context `pwd` --dockerfile $DOCKERFILE_PATH --destination $DOCKER_IMAGE_NAME
              '''
            }
          }
        }
      }
    }

    stage('Deploy to Kubernetes') {
      steps {
        container(name: 'kubectl', shell: '/bin/sh') {
          withCredentials([file(credentialsId: 'k3s-hq-admin', variable: 'KUBECONFIG')]) {
            sh "echo $KUBECONFIG > /.kube/config"
            sh "kubectl apply -f manifest.yaml"
          }
        }
      }
    }
  }
}
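
Note that withCredentials([file(...)]) already exports the temporary credential file path as KUBECONFIG, and kubectl honors that environment variable, so the echo into /.kube/config may not even be necessary. A minimal variant of the deploy stage (a sketch, assuming the bound file is a complete kubeconfig for the target cluster):

    stage('Deploy to Kubernetes') {
      steps {
        container(name: 'kubectl', shell: '/bin/sh') {
          withCredentials([file(credentialsId: 'k3s-hq-admin', variable: 'KUBECONFIG')]) {
            // kubectl picks up the config path from the KUBECONFIG environment variable
            sh 'kubectl apply -f manifest.yaml'
          }
        }
      }
    }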

Sir, I did the same as you did but got this error:

error: error loading config file "****": yaml: line 31: could not find expected ':'

Deployment stage:

  stage('Deploy on EKS') {
    steps {
      container(name: 'kubectl', shell: '/bin/sh') {
        withCredentials([file(credentialsId: 'KUBECONFIGFILE', variable: 'KUBECONFIG')]) {
          sh '''
            echo $KUBECONFIG > /.kube/config &&
            apt-get update &&
            apt-get install -y awscli &&
            aws eks --region eu-west-1 update-kubeconfig --name my-cluster &&
            aws configure list &&
            aws --version &&

            kubectl apply -f /k8s/app/app-ns.yaml
          '''
        }
      }
    }
  }

For more debugging info, I checked that I managed to update the kubeconfig context and successfully logged in to my IAM account on AWS, as shown below:

+ aws eks --region eu-west-1 update-kubeconfig --name my-cluster
Updated context arn:aws:eks:eu-west-1:542649309338:cluster/my-cluster in ****
+ aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************O55U         iam-role    
secret_key     ****************CN7k         iam-role    
    region                <not set>             None    None
+ aws --version
aws-cli/1.19.1 Python/3.9.2 Linux/5.10.198-187.748.amzn2.x86_64 botocore/1.20.0
+ kubectl apply  -f /k8s/app/app-ns.yaml
error: error loading config file "****": yaml: line 31: could not find expected ':'

Can anyone help me?
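
The masked path in that error is the $KUBECONFIG file that aws eks update-kubeconfig has just rewritten, so the merged file seems to be what kubectl can no longer parse. As a sketch (the output will contain cluster details, so treat it as sensitive), the lines around the reported line 31 could be dumped just before the apply to see what is malformed:

# hypothetical debug commands, added inside the same sh block before 'kubectl apply'
sed -n '25,40p' "$KUBECONFIG"              # show the YAML around the reported line 31
cat -A "$KUBECONFIG" | sed -n '25,40p'     # reveal stray tabs or control characters, if any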