Hello,
I am trying to reach community members to get their ideas and thoughts.
Are you able to build and push Docker images in declarative pipelines? The repository is ECR.
If you use container tools other than Kaniko, can you please provide details about the tool?
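For context, this is roughly the shape of declarative pipeline I have in mind. This is a sketch only: the account ID, region, repository name, and pod YAML are placeholders, and I am assuming ECR authentication goes through the `ecr-login` credential helper configured in `/kaniko/.docker/config.json`:

```groovy
// Rough sketch only - the registry address, region, repo name, and agent
// YAML below are placeholders, not a tested setup.
pipeline {
    agent {
        kubernetes {
            yaml '''
kind: Pod
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command: ["sleep"]
    args: ["99d"]
'''
        }
    }
    stages {
        stage('Build and push to ECR') {
            steps {
                container('kaniko') {
                    // Assumes /kaniko/.docker/config.json maps the ECR host to
                    // the "ecr-login" credential helper bundled with kaniko.
                    sh '''/kaniko/executor \
                        --context `pwd` \
                        --dockerfile `pwd`/Dockerfile \
                        --destination <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:latest'''
                }
            }
        }
    }
}
```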
When using the Kaniko container tool, have you encountered any issues in a shell script stage like the one below?
```
wrapper script does not seem to be touching the log file in /home/jenkins/workspace/XXXX-XXXX-XXXX-CICD@tmp/durable-e0c32a63
13:49:19 (JENKINS-48300: if on an extremely laggy filesystem, consider -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=86400)
```
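Regarding the JENKINS-48300 hint in that message: as far as I understand, the property has to be passed to the controller JVM. A minimal sketch, assuming the controller runs from the official jenkins/jenkins Docker image (the image tag and port mapping here are just examples):

```shell
# Sketch only: raise the durable-task heartbeat check interval to 24h (86400s)
# so a slow filesystem does not trip the "wrapper script does not seem to be
# touching the log file" check. JAVA_OPTS is honored by the official
# jenkins/jenkins Docker image; adjust for your own deployment.
docker run -d --name jenkins \
  -e JAVA_OPTS="-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=86400" \
  -p 8080:8080 \
  jenkins/jenkins:lts
```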
I have gone through some blogs related to this topic.
Quoted GitHub issue (opened 12:56PM 02 Feb 22 UTC, closed 07:23PM 06 Feb 22 UTC, label: bug):
### Jenkins and plugins versions report
<details>
<summary>Environment</summary>
```text
Paste the output here
```
</details>
### What Operating System are you using (both controller, and any agents involved in the problem)?
Kubernetes with linux nodes on: CentOS Linux 7 (Core), 3.10.0-1160.49.1.el7.x86_64, cri-o://1.18.4, v1.21.3
Jenkins in containers - jenkins/jenkins:2.333-jdk11
Jenkins kubernetes plugin - 1.31.3
JNLP version - jenkins/inbound-agent:4.11-1-jdk11
### Reproduction steps
I use a pipeline that looks like this:
```
/**
* This pipeline will build and deploy a Docker image with Kaniko
* https://github.com/GoogleContainerTools/kaniko
* without needing a Docker host
*
* You need to create a jenkins-docker-cfg secret with your docker config
* as described in
* https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token
*/
podTemplate(yaml: '''
kind: Pod
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:v1.6.0-debug
imagePullPolicy: Always
command:
- sleep
args:
- 99d
volumeMounts:
- name: jenkins-docker-cfg
mountPath: /kaniko/.docker
volumes:
- name: jenkins-docker-cfg
projected:
sources:
- secret:
name: nexus-registry
items:
- key: .dockerconfigjson
path: config.json
'''
)
{
node(POD_LABEL) {
stage('Clone Git') {
checkout([$class: 'GitSCM', branches: [[name: '*/master']], extensions: [], userRemoteConfigs: [[url: 'project.git']]])
}
stage('Build with Kaniko') {
container('kaniko') {
sh '/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --skip-tls-verify --use-new-run=true --destination=registry'
}
}
stage ('Restart Deployment in K8S') {
withKubeConfig([credentialsId: 'k8s', serverUrl: 'https://10.96.0.1']) {
sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
sh 'chmod u+x ./kubectl'
sh './kubectl rollout restart deployment api-server -n stokky'
}
}
}
}
```
After the Git stage, Kaniko builds the image and pushes it to the registry, but after that I can see in the logs:
```
INFO[0072] Pushed image to 1 destinations
wrapper script does not seem to be touching the log file in /home/jenkins/agent/workspace/restapi-build-deploy@tmp/durable-dc65ccdd
(JENKINS-48300: if on an extremely laggy filesystem, consider -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=86400)
```
### Expected Results
After Kaniko pushed the image, the pipeline should go to the next stage, but it doesn't.
### Actual Results
The pipeline fails after some timeout, and it seems that after Kaniko pushes the image, the JNLP agent hangs and loses its connection to the Jenkins controller.
### Anything else?
In the Jenkins logs I can only see:
```
Feb 02, 2022 12:53:38 PM FINE org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecProc
process is no longer alive
```
But I don't think it's related.
There are no errors in the Jenkins log or in the jnlp container.
I tried changing the plugin version, the Jenkins version, and Kaniko - no luck.
How can I fix this issue?
Quoted GitHub issue (opened 07:20PM 28 Apr 20 UTC, labels: priority/p3, area/usability):
Hey,
we want Jenkins to start a pod with two containers, JNLP and Kaniko, for creating images, but when running our Kaniko container as root, the `sh` command can't be executed and our pipeline won't continue to run.
In this issue: https://github.com/GoogleContainerTools/kaniko/issues/653, someone seems to have had a similar problem, but sadly that issue was closed without an explanation of how it was solved.
Now we have a pretty similar problem.
The JNLP container is defined in our Jenkins.
This is our YAML definition of the pod and the kaniko container:
```
kind: "Pod"
spec:
restartPolicy: Never
containers:
- name: kaniko
image: 'internal-registry/gcr.io/kaniko-project/executor:debug-v0.19.0'
imagePullPolicy: 'Always'
command:
- /busybox/cat
tty: true
workingDir: /home/jenkins/agent
env:
- name: "JENKINS_AGENT_WORKDIR"
value: "/home/jenkins/agent"
volumeMounts:
- name: workspace-volume
mountPath: /home/jenkins/agent
readOnly: false
- name: kaniko-docker-volume
mountPath: /kaniko/.docker
- name: ca-certificates-volume
mountPath: /kaniko/ssl/certs/ca-certificates.crt
subPath: ca-certificates.crt
- name: system-tmp-volume
mountPath: /tmp
readOnly: false
securityContext:
readOnlyRootFilesystem: false
runAsUser: 0
volumes:
- name: workspace-volume
emptyDir:
medium: "Memory"
- name: kaniko-docker-volume
emptyDir:
medium: "Memory"
- name: ca-certificates-volume
configMap:
name: ca-certificates
items:
- key: cert.pem
path: ca-certificates.crt
- name: system-var-volume
emptyDir:
medium: "Memory"
- name: system-tmp-volume
emptyDir:
medium: "Memory"
```
This is what we want to do with our pipeline:
```
podTemplate(yaml: yaml, showRawYaml: true) {
node(POD_LABEL) {
// --- JNLP ---
stage('Checkout') {
container(name: 'jnlp') {
workspacePath = sh(script: 'echo `pwd`', returnStdout: true).trim()
writeFile file: "Dockerfile", text: """
FROM debian/stretch:1.0-Final.0
CMD ["/bin/bash"]
"""
}
containerLog(name: 'jnlp')
}
// --- KANIKO ---
stage('build') {
try {
container(name: 'kaniko', shell: '/busybox/sh') {
withEnv(['PATH+EXTRA=/busybox:/kaniko']) {
echo sh(script: "id", returnStdout: true).trim()
}
withCredentials([[$class: 'VaultUsernamePasswordCredentialBinding',
credentialsId: 'vault', usernameVariable: 'USERNAME',
passwordVariable: 'PASSWORD']]) {
writeFile file: "config.json", text: """{ "auths": {
"https://internal-registry": { "username": "$USERNAME",
"password": "$PASSWORD" } } }"""
}
sh 'mv `pwd`/config.json /kaniko/.docker/config.json'
            sh '''/kaniko/executor --verbosity debug \
                --dockerfile `pwd`/Dockerfile --context dir://`pwd` \
                --insecure --skip-tls-verify \
                --destination gcr.io/kaniko-project/executor:debug--swp'''
        }
    } finally {
containerLog(name: 'jnlp')
containerLog(name: 'kaniko')
}
}
}
}
```
We want Kaniko to read the Dockerfile (which will later not be written in this part of the code), build an image, and push it to our internal registry.
We have already decided that we need to run Kaniko as the root user, since it copies the Dockerfile to /kaniko/Dockerfile right at the start of the script and the container needs permission to do so.
But when we use runAsUser: 0, the container can't run the sh command and nothing happens. Why can't the sh command be executed when running Kaniko as root?
Both containers are up and running, and the only thing our log shows us is:
```
2020-04-28T18:35:54.461 [Pipeline] stage
2020-04-28T18:35:54.464 [Pipeline] { (build)
2020-04-28T18:35:54.515 [Pipeline] container
2020-04-28T18:35:54.517 [Pipeline] {
2020-04-28T18:35:54.570 [Pipeline] withEnv
2020-04-28T18:35:54.571 [Pipeline] {
2020-04-28T18:35:54.618 [Pipeline] sh
```
Best regards.
Any help is appreciated.
Appreciate your thoughts and feedback.