Jenkins: 2.440.1
OS: Linux - 6.1.79-99.164.amzn2023.x86_64
Java: 17.0.10 - Amazon.com Inc. (OpenJDK 64-Bit Server VM)
I am using the latest version of the inbound agent, deployed by Jenkins into an EKS cluster via the Kubernetes plugin. I have also tried other images with tags such as latest-jdk17, but I keep running into the error below.
I also tried adding a JAVA_ARGS environment variable set to hudson.slaves.SlaveComputer.allowUnsupportedRemotingVersions=true to the Pod template, but I am unable to get past this error. Any thoughts on next steps?
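A rough equivalent of that Pod template setting, expressed in the Kubernetes plugin's podTemplate pipeline DSL (the label and surrounding structure here are assumptions; the value is copied from the attempt above):

podTemplate(
    label: 'eks-agent',  // hypothetical label
    envVars: [
        // value as tried above; note that a JVM system property normally needs a -D prefix
        envVar(key: 'JAVA_ARGS', value: 'hudson.slaves.SlaveComputer.allowUnsupportedRemotingVersions=true')
    ]
) {
    node('eks-agent') {
        // build steps
    }
}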
- jnlp -- terminated (255)
-----Logs-------------
Mar 12, 2024 5:32:09 PM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: codecommit-1-13-jkxzc-l3vg3-g6z5k
Mar 12, 2024 5:32:09 PM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Mar 12, 2024 5:32:09 PM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.11
Mar 12, 2024 5:32:09 PM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Mar 12, 2024 5:32:09 PM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Mar 12, 2024 5:32:10 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://<my-ip>:8080/]
Mar 12, 2024 5:32:10 PM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: Agent version 4.13 or newer is required.
java.io.IOException: Agent version 4.13 or newer is required.
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:229)
at hudson.remoting.Engine.innerRun(Engine.java:724)
at hudson.remoting.Engine.run(Engine.java:540)
If I run a container from my inbound-agent image, the version it reports is the same as what the latest tag ships. So what am I missing that causes this mismatch between the agent version and the Remoting version?
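For reference, the check looks roughly like this — assuming the agent jar sits at /usr/share/jenkins/agent.jar as in the official image, and that the jar accepts a -version flag; the image name is a placeholder:

docker run --rm --entrypoint java \
  <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag> \
  -jar /usr/share/jenkins/agent.jar -version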
The Kubernetes agent that is being run is not the most recent release of the agent. There may be a configuration error in your Kubernetes agent definition; double-check every place the agent image is specified. It could also be that a long-lived cache of container images is serving a stale image and you're mistakenly relying on the latest tag. If you are using the latest tag, you should probably switch to a precisely pinned version.
Interesting. I created a brand new Cloud9 instance just to avoid that exact issue with cached images and also created a brand new ECR repository. I will check again.
I created a new container image with the Dockerfile below and pushed it to a brand-new ECR repository (the build-and-push steps are sketched after the Dockerfile). Same issue again. Could caching still be a factor somehow? Am I using the wrong tag for jenkins/inbound-agent?
# Pull the kubectl binary from the Bitnami image in a separate stage
FROM bitnami/kubectl:1.28 AS kubectl

FROM jenkins/inbound-agent:jdk17
COPY --from=kubectl /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/
USER jenkins
ENTRYPOINT ["/usr/local/bin/jenkins-agent"]
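The build-and-push flow was along these lines; the account ID, region, repository, and tag are placeholders:

aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker build -t <repo>:<tag> .
docker tag <repo>:<tag> <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>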
I ran docker system prune -a to completely clean up my local Docker setup (it recovered around 15 GB). I rebuilt a brand-new image with the Dockerfile below under a new tag and pushed it to an ECR repository that contained no previous images. Same issue again. I then launched a local container and ran the command that displays the current version (the same check as above).
Dockerfile:
# Pull the kubectl binary from the Bitnami image in a separate stage
FROM bitnami/kubectl:1.28 AS kubectl

# Pinned inbound-agent release instead of a floating tag
FROM jenkins/inbound-agent:3206.vb_15dcf73f6a_9-5-jdk17
# Environment variable from the earlier attempt to relax the version check
ENV JAVA_ARGS="hudson.slaves.SlaveComputer.allowUnsupportedRemotingVersions=true"
COPY --from=kubectl /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/
USER jenkins
ENTRYPOINT ["/usr/local/bin/jenkins-agent"]
Verified that the image really does contain the latest version of the inbound agent.
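To additionally rule out a cached or stale image, the digests can be compared; the account, repository, pod, and namespace names below are placeholders:

# digest of the image as pushed to ECR
docker inspect --format '{{index .RepoDigests 0}}' \
  <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
# image and digest the agent pod actually pulled
kubectl describe pod <agent-pod> -n <namespace> | grep -E 'Image( ID)?:'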
I also think that if the agent connected to the Jenkins controller is reporting an outdated version of Remoting, then somehow the agent image you are creating is not the agent image that the Jenkins controller is actually using.
I’ve confirmed that remoting in the jenkins/inbound-agent:3206.vb_15dcf73f6a_9-5-jdk17 container image is version 3206.vb_15dcf73f6a_9.
I configured an inbound agent as instructed in the README and ran the agent from the command line.
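The exact command isn't preserved here; the typical invocation for an inbound agent has this shape, with placeholder values:

java -jar agent.jar \
  -url http://<controller-host>:8080/ \
  -name <agent-name> \
  -secret <secret> \
  -workDir /home/jenkins/agent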
I genuinely appreciate your help. After exploring all the options, I carefully reviewed the pipeline: we had a step that overrode the image I specify in the Jenkinsfile. An older version of the pipeline was active, and it hardcoded the agent configuration in the build step (see the sketch below). I wasn't aware that this configuration would override whatever is specified in the Pod Template on the Jenkins cloud configuration. After updating the image there, I am able to run the pipeline successfully.
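The original snippet isn't reproduced here, but the override was of this general shape: a container named jnlp in the pod YAML replaces the default agent container, so its image wins over whatever the cloud-level Pod Template specifies (the image name, tag, and stage below are hypothetical):

pipeline {
    agent {
        kubernetes {
            // "jnlp" is the reserved name of the agent container;
            // defining it here overrides the Pod Template's agent image
            yaml '''
                apiVersion: v1
                kind: Pod
                spec:
                  containers:
                  - name: jnlp
                    image: <account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<old-tag>
            '''
        }
    }
    stages {
        stage('Deploy') {
            steps {
                sh 'kubectl get pods'
            }
        }
    }
}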