I’m trying to use a docker agent to run my pipeline(s).
However, I’m not really getting the result I wanted.
My desired result:
- Pipeline triggered on jenkins-master
- A new build agent (jenkins-agent) is started as a Docker container, which will control the build
- The build agent runs checkout scm to pull my project
- The build agent starts up new containers for each of the builds in my project:
  - Parallel:
    - Build BackEnd (in its own build container)
    - Build FrontEnd (in its own build container)
- The build agent does publishing, artifact checking, Slack webhooks, etc.
An example of the Jenkinsfile I’m currently using:
```groovy
pipeline {
    // This top-level agent should be a Docker container that controls this pipeline (docker-executor).
    // It's currently set to 'any', because that's the only thing that seems to work.
    agent any
    stages {
        stage('Checkout') {
            steps {
                echo 'do source checkout here'
            }
        }
        // The stages below should be started & controlled by the top-level agent
        stage('Parallel Builds') {
            parallel {
                stage('Example Maven') {
                    agent { docker 'maven:3.8.1-adoptopenjdk-11' }
                    steps {
                        echo 'Hello, Maven'
                        sh 'mvn --version'
                    }
                }
                stage('Example JDK') {
                    agent { docker 'openjdk:8-jre' }
                    steps {
                        echo 'Hello, JDK'
                        sh 'java -version'
                    }
                }
            }
        }
    }
}
```
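For comparison, this is roughly what I want the top level to look like: instead of `agent any`, the pipeline would be pinned to a label served by a Docker cloud template (the label name `docker-agent` here is hypothetical — it would have to match whatever label the cloud template is configured with):

```groovy
pipeline {
    // Hypothetical label; must match a label on a Docker cloud template
    // so Jenkins provisions a jenkins-agent container to run the pipeline
    agent { label 'docker-agent' }
    stages {
        stage('Checkout') {
            steps {
                echo 'do source checkout here'
            }
        }
    }
}
```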
Current behaviour:
- If I set the top-level agent to `any`, it uses jenkins-master as the executor, running ALL of the stages through that executor (requiring 3 executors on the built-in node).
- If I don’t have 3 executors on jenkins-master, it hangs and never completes the pipeline.
I’ve tried adding a node via the cloud setup (url:8080/configureClouds), but didn’t have much luck there either:

```
// docker.sock is forwarded from the host hardware to jenkins-master. Forward it again.
Docker Host URI: /var/run/docker.sock
Credentials: none
Enabled: true
Expose DOCKER_HOST: true
Container Cap: 0
Template:
  Labels: -
  Enabled: true
  Docker Image: jenkins/inbound-agent
  Mounts: type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock
  Usage: As much as possible
  Connect Method: Attach Docker Container
```
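One guess about that NullPointerException, based only on where it occurs: the Docker plugin expects the Docker Host URI to be a full URI with a scheme, not a bare socket path, so a missing `unix://` prefix may be what trips up `NettyDockerCmdExecFactory.init`. The corrected setting would look like:

```
Docker Host URI: unix:///var/run/docker.sock
```

(or `tcp://<host>:2375` if the daemon is exposed over TCP — both forms are from the plugin’s own field hint; whether this is actually the cause of the NPE is an assumption.)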
When trying to run this, I get a NullPointerException (at NettyDockerCmdExecFactory.init)
I’m not trying to get a cookie-cutter full pipeline from you guys (which is why the stages currently just have echo commands in them); I’m fine with setting up the stages.
What I’m struggling with is the communication between Jenkins & Docker.
What I’m trying to do:
- Click Run Build in Jenkins (on jenkins-master).
- jenkins-master has 0 executors (best practice), so it starts up a Docker container, ‘jenkins-slave’, which handles the pipeline.
- jenkins-slave does some of the early steps (Slack notification, etc.) until it hits the build stages.
- When it hits the build stages, jenkins-slave should start up containers for each stage (as defined in the Jenkinsfile).

Right now I only seem to be able to run anything if it’s handled by jenkins-master, which is a security issue. If I have no other stages (just the build stages), jenkins-master can start & stop the required containers just fine. The issue is that I’m trying to get another container (jenkins-slave) to be the executor node for the pipeline.
What are you using for your agents? ssh? docker templates? kubernetes templates? ec2?
If you’re not using your controller, you need another agent set up. The cloud ones can spin up a new agent on demand, but if you’re not using those, you’ll have to have one ready with Docker installed. I personally use an SSH agent to SSH into the same machine running the controller, but as a different user, so there are no permission issues. Works great for small scale.
I tried using SSH, but those agents keep failing to boot (ssh failed to start).
I’m now using the jenkins/agent image as a base (and adding Docker in my own image, as that’s not available in the default image). The Connect Method is ‘Attach Docker Container’.
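For reference, the image I’m building is roughly this (a minimal sketch — it assumes the Debian-based `jenkins/agent` variant, and installs only the Docker CLI, since the daemon itself lives on the host and is reached through the bind-mounted socket):

```dockerfile
FROM jenkins/agent

USER root
# Install only the Docker client; the daemon is reached through the
# bind-mounted /var/run/docker.sock from the host
RUN apt-get update \
    && apt-get install -y --no-install-recommends docker.io \
    && rm -rf /var/lib/apt/lists/*
USER jenkins
```

Note that the jenkins user also needs permission on the mounted socket (e.g. a group id matching the host’s docker group), which this sketch leaves out.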
What’s currently happening:
- An agent is started.
- The Checkout stage (see the Jenkinsfile example above) runs on the agent.
- Two more agents are started. These shouldn’t be started; they’re not actually doing anything (but they seem to be handling the start/stop of the other containers?).
- The Maven and OpenJDK containers are started.
- The echoes run fine.
- The containers get stuck on the sh commands.
- After a while, the pipeline is aborted: process apparently never started in /home/jenkins/… for both.
So, my questions now are:
- Why are there 3 containers? (It seems to be spinning up jenkins-agents to run the other Docker agents?) Do I need to set the number of executors on the agent somehow?
- Why are the sh steps failing/getting stuck? They shouldn’t need anything, and should run fine on the containers themselves. Could there be an issue with the communication back to Jenkins?
Edit: Setting the Remote File System Root to /home/jenkins/agent instead of /home/jenkins fixed the sh commands getting stuck.
However, it’s still spinning up an additional jenkins-agent for each of the per-stage Docker agents in the pipeline. Can I get rid of this additional agent?
Example: The Jenkinsfile below starts up 3 containers: two jenkins-agent containers and the maven container. Can I get rid of the second jenkins-agent container (which seems to just be starting/stopping the maven container)?
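One thing that may explain the extra agent: in declarative pipeline, a per-stage `docker` agent normally requests a fresh node (which, with a cloud configured, means a fresh jenkins-agent container). The long-form `docker` agent syntax has a `reuseNode` option that instead runs the stage’s container on the node already executing the pipeline. A sketch of what that would look like (the `docker-agent` label is hypothetical, standing in for whatever label the controlling agent has):

```groovy
pipeline {
    agent { label 'docker-agent' } // hypothetical label for the controlling jenkins-agent
    stages {
        stage('Example Maven') {
            agent {
                docker {
                    image 'maven:3.8.1-adoptopenjdk-11'
                    // Run this container on the same node (and workspace) as the
                    // top-level agent, instead of asking the cloud for a new node
                    reuseNode true
                }
            }
            steps {
                sh 'mvn --version'
            }
        }
    }
}
```

Whether this avoids the second jenkins-agent container in this particular cloud setup is something I can only guess at, but it is the documented way to keep per-stage containers on the pipeline’s own node.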