Docker not working in pipeline

Hi,
I created a simple pipeline with a Docker image:

pipeline {
    agent {
        docker {
            image 'docker/whalesay'
        }
    }
    stages {
        stage('Test') {
            steps {
            }
        }
    }
}

I'm a new Jenkins user, and I have this problem:

Started by user konorus
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/pipeline-build
[Pipeline] {
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . docker/whalesay
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
Jenkins does not seem to be running inside a container
$ docker run -t -d -u 975:974 -w /var/lib/jenkins/workspace/pipeline-build -v /var/lib/jenkins/workspace/pipeline-build:/var/lib/jenkins/workspace/pipeline-build:rw,z -v /var/lib/jenkins/workspace/pipeline-build@tmp:/var/lib/jenkins/workspace/pipeline-build@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** docker/whalesay cat
$ docker top 5056052fa982e9d3a5e8fdcc8f16b86252accc13bf0ff7ecc3f0fdf7ed48f203 -eo pid,comm
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
java.io.IOException: Failed to run top '5056052fa982e9d3a5e8fdcc8f16b86252accc13bf0ff7ecc3f0fdf7ed48f203'. Error: Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg. Error: top can only be used on running containers

I don’t have a specific answer, but I do know podman and Docker are not exact matches. Have you looked at any podman-specific integration?


Did you find a solution to this? I am running into the same problem. We have a RHEL 8 server for the agent which does not support Docker natively and instead has podman. I am able to do most things manually with docker commands (which get aliased to podman) but our container based Jenkins jobs throw the same “top can only be used on running containers” error.

I am able to manually spin up the agent container using the same docker run command that’s in the job log as well as run the subsequent top command and it works. Wondering if it’s a timing issue of some kind.

Edit: I was able to replicate the issue manually outside of the Jenkins job and I suspect it might have to do with trying to run rootless podman. When I run the following command as the root user it works, but if I run it as the _jenkinsuser account that the Jenkins jobs run under I get the top error.

uuid=$(uuidgen) && \
docker run --cidfile /tmp/docker-$uuid.cid -t -d -u 169654:169654 -w "/home/_jenkinsuser/remote/workspace/Automation/Docker agent test" -v "/home/_jenkinsuser/remote/workspace/Automation/Docker agent test:/home/_jenkinsuser/remote/workspace/Automation/Docker agent test:rw,z" -v "/home/_jenkinsuser/remote/workspace/Automation/Docker agent test@tmp:/home/_jenkinsuser/remote/workspace/Automation/Docker agent test@tmp:rw,z" automationservices-docker-agents.artifactory.bclc.com/lin-base cat && \
cat /tmp/docker-$uuid.cid | xargs -I {} podman top {} -eo user,pid,comm
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
9df68105db686d640df06654000a5f10c6938c8312eff8e5d40787aef5835df6
Error: top can only be used on running containers

I was having several issues with rootless podman before even getting this far.
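A quick way to see whether the container from the run above actually stayed up, reusing the same cid file (paths assumed from the command above):

```shell
# Read the container ID written by --cidfile and ask podman for its state.
cid=$(cat /tmp/docker-$uuid.cid)
podman inspect -f '{{.State.Status}} exit={{.State.ExitCode}}' "$cid"
# If the status is "exited", "podman top" has nothing to attach to, which
# would explain the "top can only be used on running containers" error.
```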


Did some more debugging and found that the container is exiting right away after it's created.

$ docker ps -a
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
CONTAINER ID  IMAGE                                                                  COMMAND     CREATED             STATUS                      PORTS       NAMES
dfa4b0f3ef52  asdf/lin-base:latest  cat         About a minute ago  Exited (127) 3 seconds ago              frosty_yalow

The podman log shows this…

[_jenkinslotto@kam1odbus100:~]$ podman logs frosty_yalow
cat: error while loading shared libraries: /lib64/libc.so.6: cannot apply additional memory protection after relocation: Permission denied
cat: error while loading shared libraries: /lib64/libc.so.6: cannot apply additional memory protection after relocation: Permission denied

Looks like an SELinux-related problem.

Temporarily disabling it allows the container to start properly and the Jenkins job no longer fails with that error. However, the job hangs indefinitely even on a simple “hello world” echo, so YMMV. Still not sure how to get SELinux to work nicely with this, but at least we know the root cause, at least in my case.
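A rough sketch of how to confirm SELinux is the culprit without leaving it disabled (assumes auditd is running; the commands need root):

```shell
getenforce                         # current mode: Enforcing / Permissive / Disabled
sudo setenforce 0                  # switch to permissive temporarily (resets on reboot)
# ...re-run the failing container here; if it now starts, SELinux is involved...
sudo ausearch -m avc -ts recent    # list recent AVC denials, if any
sudo setenforce 1                  # back to enforcing when done
```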

SELinux is known to cause other problems for Jenkins users. See JENKINS-64913 for details from one user about their git operations. See JENKINS-67497 for SELinux-related issues on a STIG-hardened machine with SELinux enabled. See JENKINS-65395 for a problem with Docker on an SELinux configuration.

I’m not aware of anyone using podman for Jenkins agents with containers. That doesn’t mean there isn’t anyone doing it, but I’m not aware of anyone that has stated they are using it.


The main reason we’re using podman in this case is that RHEL 8 and beyond drop direct support for Docker and ship podman as an alternative, with docker as an alias. Though you can still install docker-ce, there are some caveats to that approach we’ve run into. There seems to be a shift going on from Docker to alternatives like podman, so this will likely become more common as time goes on, and it’s the direction we’re going to try to go.

I was able to solve the SELinux issues with the commands mentioned here. I don’t recall if these are the exact ones that solved it, as I tried some others beforehand too, but this should get you in the ballpark of a solution.

The current error I’m running into is this. I tried disabling SELinux and there was no change, so I don’t believe that’s the issue any longer.

process apparently never started in /home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-96028e25
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)

It works if I use the args "-u root" option, but we of course don’t want to do that.

I set the LAUNCH_DIAGNOSTICS option in the script console via org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true and now get a long stream of these types of errors in the job log:

Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
sh: /home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt: Permission denied
sh: /home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-result.txt.tmp: Permission denied
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied
mv: cannot stat '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-result.txt.tmp': No such file or directory
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied

Which may point to this, but I am not using Kubernetes so I’m not sure how to set what they suggest.
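For reference, the same diagnostics flag can also be enabled at controller startup instead of the script console, as the log hint suggests (this assumes a plain `java -jar` launch; for a packaged install it would go in the service's JVM options instead):

```shell
# Pass the durable-task diagnostic flag as a JVM system property at startup.
java -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true -jar jenkins.war
```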

Can the uid running jenkins inside the container write to that directory?

If I’m understanding correctly I believe the answer is yes.

The _jenkinslotto user is the one used to execute things on the underlying agent VM. Its uid (169654) is the one seen in the docker run command in the job log. Example:

$ docker run -t -d -u 169654:169654 -w "/home/_jenkinslotto/remote/workspace/Automation/Docker agent test" ...

It has full access to the workspace and subdirectories to run all the Jenkins jobs on the agent, and I verified that with the following:

$ touch /home/_jenkinslotto/remote/workspace/Automation/test.txt
$

Additionally, I’ve created a user with the same name/uid in the image/container. We’d started doing this a while back in our images to solve other issues that came up. In this case it did not seem to make a difference.

RUN groupadd -g 169654 _jenkinslotto && useradd -u 169654 -g 169654 _jenkinslotto

Admittedly the inside vs outside user contexts sometimes get a little fuzzy for me so it’s quite possible there’s something we’re not doing right or could do better.
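One thing that might be worth trying with rootless podman (an assumption on my part, not something from the job log): rootless podman remaps uids through /etc/subuid, so `-u 169654` inside the container does not correspond to uid 169654 on the host unless the id mapping is preserved. Something like:

```shell
# --userns=keep-id keeps the invoking user's uid/gid identical inside the
# container, so files in the bind-mounted workspace stay writable without
# resorting to -u root. (Sketch only; paths/image taken from the job log.)
podman run --userns=keep-id -t -d -u 169654:169654 \
  -w "/home/_jenkinslotto/remote/workspace/Automation/Docker agent test" \
  -v "/home/_jenkinslotto/remote/workspace/Automation/Docker agent test:/home/_jenkinslotto/remote/workspace/Automation/Docker agent test:rw,z" \
  automationservices-docker-agents.artifactory.****.com/lin-base:dev cat
```

With the docker-workflow plugin, that flag would presumably go in the agent block’s `args`.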

groupadd doesn’t really do anything inside a docker container by itself. It just adds entries to /etc/passwd and /etc/group, which handle the uid/gid-to-name mapping but don’t grant access.

When you are inside a container you can run id and it should tell you all the uids/gids your user has. If you want the user inside the container to have more groups, you need to add --group-add to your docker run.
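A minimal illustration of that point (using a generic public image here for the sake of example, not the agent image from the thread):

```shell
# Without --group-add, the process only has its primary uid/gid:
docker run --rm -u 169654:169654 alpine id

# With --group-add, supplementary groups are attached to the process:
docker run --rm -u 169654:169654 --group-add 975 alpine id
```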

So the docker run in the job log would be stuff happening inside a… sub-container? DinD gets very confusing very fast.

The original error message was about the agent being unable to write to the directory, not the docker run inside of the job.

What are you using to run your agents? SSH? JNLP? Cloud provider plugin? Docker plugin? I’m talking specifically about the agent, not the job

For example from ci.jenkins.io, if you look at the “Build Executor Status” section of the side bar it should list the active agents.
If you click on one of the agent names, you’ll get the agent information. I’m not really sure what will tell you what type of agent it is, but knowing that will inform next steps.

To clarify, the docker run command is the initial command used to spin up the container on the agent VM. We’re not doing any DinD. Here’s the rest of the log output from that part of the job.

[Pipeline] withDockerContainer
Linux does not seem to be running inside a container
$ docker run -t -d -u 169654:169654 -w "/home/_jenkinslotto/remote/workspace/Automation/Docker agent test" -v "/home/_jenkinslotto/remote/workspace/Automation/Docker agent test:/home/_jenkinslotto/remote/workspace/Automation/Docker agent test:rw,z" -v "/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp:/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp:rw,z" -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** automationservices-docker-agents.artifactory.****.com/lin-base:dev cat
$ docker top 66d495dd8a21e1b7ceb8e089d0d9b41a13924a18616172b6c98f88ca575efc3d -eo pid,comm

That comes from the docker-workflow plugin, I believe, which is driven by this in the Jenkinsfile. Once the plugin spins up the container it runs the rest of the job inside it; I think it uses docker exec for that, but I’m not sure. I thought I had included the Jenkinsfile in a previous comment, but I might have edited it out inadvertently.

pipeline {
    agent {
        docker {
            label "linux-docker"
            //args "-u root"
            image "automationservices-docker-agents.artifactory.****.com/lin-base:dev"
        }
    }
    stages {
        stage("Example Stage") {
            steps {
                echo "hello world"    // Works
                sh "touch test.txt"   // fails with "process apparently never started in..."
            }
        }
    }
}

What are you using to run your agents?

The agents are RHEL 8 VMs that the controller connects to via SSH using the Launch agents via SSH launch method. After the docker run command is executed by the docker-workflow plugin, the rest of the steps happen inside the container. I think it uses docker exec to accomplish that behind the scenes, but I’m not sure, as it does not show up in the logs.

I’ve included the full job output, which might reveal some more. I noticed that…

  1. The echo command works fine; it’s just the sh command (so far) that does not
  2. The permissions errors appear in the log even after the container is stopped, but that might be a timing thing. I’ve noticed that errors in job logs can sometimes appear well after the actual error occurred, in terms of position in the log file.
Started by user *******
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Linux in /home/_jenkinslotto/remote/workspace/Automation/Docker agent test
[Pipeline] {
[Pipeline] isUnix
[Pipeline] withEnv
[Pipeline] {
[Pipeline] sh
+ docker inspect -f . automationservices-docker-agents.artifactory.****.com/lin-base:dev
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] withDockerContainer
Linux does not seem to be running inside a container
$ docker run -t -d -u 169654:169654 -w "/home/_jenkinslotto/remote/workspace/Automation/Docker agent test" -v "/home/_jenkinslotto/remote/workspace/Automation/Docker agent test:/home/_jenkinslotto/remote/workspace/Automation/Docker agent test:rw,z" -v "/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp:/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp:rw,z" -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** automationservices-docker-agents.artifactory.****.com/lin-base:dev cat
$ docker top 66d495dd8a21e1b7ceb8e089d0d9b41a13924a18616172b6c98f88ca575efc3d -eo pid,comm
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Example Stage)
[Pipeline] script
[Pipeline] {
[Pipeline] }
[Pipeline] // script
[Pipeline] echo
hello world
[Pipeline] sh
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
sh: /home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt: Permission denied
sh: /home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-result.txt.tmp: Permission denied
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied
mv: cannot stat '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-result.txt.tmp': No such file or directory
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied
... (repeats many many times)
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied
process apparently never started in /home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 66d495dd8a21e1b7ceb8e089d0d9b41a13924a18616172b6c98f88ca575efc3d
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied
touch: cannot touch '/home/_jenkinslotto/remote/workspace/Automation/Docker agent test@tmp/durable-7811e7e7/jenkins-log.txt': Permission denied
$ docker rm -f 66d495dd8a21e1b7ceb8e089d0d9b41a13924a18616172b6c98f88ca575efc3d
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code -2
Finished: FAILURE