io.fabric8.kubernetes.client.KubernetesClientException: Received 401 on websocket. Message: Unauthorized

-Description:

Jenkins controller: 2.440.3

kubernetes-plugin: 4246.v5a_12b_1fe120e

Kubernetes Client API: 6.10.0-240.v57880ce8b_0b_2

RKE2 version: v1.25.4+rke2r1

Jnlp container: jenkins/inbound-agent:latest-jdk17

-Current setup:

Jenkins controller deployed in the RKE2 cluster. Jenkins agents get dispatched to k8s worker nodes in the same namespace as the controller. All infra runs on VMware.

-Problem Statement:

Jenkins agent pods containing 9 to 11 containers occasionally run into the following error:

io.fabric8.kubernetes.client.KubernetesClientException: Received 401 on websocket. Failure executing: GET at: https://kubernetes.default/api/v1/namespaces/jenkins/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dxxxxxxxxxxxxxxxxxx-feature-2fdevops-develop-demo-8-dbt8g--9p398&resourceVersion=70196475&timeoutSeconds=600&watch=true. Message: Unauthorized

Has anyone faced this situation?
I already posted the issue as a bug on the Jenkins issues site, but I was told it’s a local scalability issue.

It seems to be a permission issue at the kubernetes level, not the Jenkins level.

We run Jenkins in AWS EKS and I do not remember seeing such error before. Since the permission scheme is AWS specific I cannot help further, sorry.

Thanks Stephane, but we didn’t have this kind of issue until we upgraded the controller and its plugins (including the Kubernetes plugin). The issue is intermittent but keeps recurring. I was told it might be a local scalability issue, but I don’t know where to start investigating.

You are on the latest version of the plugin which is good. Not on the latest LTS of the controller but I doubt this has anything to do with your issue.

You could look at the changelog of the plugin vs the version you had before and see if there is anything that might be related and then search for a bug report on Jira.

Good luck!

Hi @elyess_mez,
I have been facing the same issue for a few days now.

io.fabric8.kubernetes.client.KubernetesClientException: Received 401 on websocket. Failure executing: GET at: https://1234abc.gr7.eu-central-1.eks.amazonaws.com/api/v1/namespaces/jenkins-agents/pods?allowWatchBookmarks=true&resourceVersion=1998149&watch=true. Message: Unauthorized.

Do you have any updates on this?
Using:

  • Jenkins controller: 2.462.3
  • kubernetes-plugin: 4295.v7fa_01b_309c95
  • Jnlp container: own based on jenkins/inbound-agent:jdk21
  • Kubernetes: v1.31.2-eks-7f9249a

Checked connection in plugin configuration: Connected to Kubernetes v1.31.2-eks-7f9249a
Rechecked cluster roles and IAM policies, and they seem fine.

Analysis of issue

When Jenkins is initialized and you automatically configure the Kubernetes plugin, authentication data is cached internally. If the Kubernetes cluster is later recreated with different authentication requirements, Jenkins will continue attempting to use the old, cached credentials, resulting in the error printed in system logs:

io.fabric8.kubernetes.client.KubernetesClientException: Received 401 on websocket. Failure executing: GET at: https://1234abc.gr7.eu-central-1.eks.amazonaws.com/api/v1/namespaces/jenkins-agents/pods?allowWatchBookmarks=true&resourceVersion=1998149&watch=true. Message: Unauthorized.

Even if you update the authentication details in the Kubernetes plugin configuration, Jenkins will keep logging the same error. Reconfiguring the existing entry does not fix it, even though the connection test prints Connected to Kubernetes v1.31.2-eks-7f9249a.
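One way to test this hypothesis outside Jenkins is to replay the same API call with the token your plugin credential is supposed to use. This is only a sketch; the namespace (jenkins-agents) and secret name (jenkins-sa-token) are placeholders for whatever your credential actually points at, and it assumes kubectl access to the cluster:

```shell
# Placeholder names: adjust the namespace and secret to match the
# service-account token your Kubernetes plugin credential uses.
TOKEN=$(kubectl -n jenkins-agents get secret jenkins-sa-token \
  -o jsonpath='{.data.token}' | base64 -d)
APISERVER=$(kubectl config view --minify \
  -o jsonpath='{.clusters[0].cluster.server}')

# Replay (roughly) the pod-list call Jenkins makes, printing only the HTTP status.
curl -sk -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/namespaces/jenkins-agents/pods"
```

If this prints 200 while Jenkins still logs 401, the stale data lives inside Jenkins; if it prints 401 as well, the token itself has been rotated or invalidated at the cluster level.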

Solution

  • Delete the Kubernetes plugin configuration: completely remove the existing Kubernetes cloud entry in Jenkins.
  • Recreate the Kubernetes plugin configuration: add a new Kubernetes cloud entry with the updated authentication details.

This approach ensures that Jenkins uses the correct, fresh authentication data for connecting to the right Kubernetes cluster.
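If you manage Jenkins with Configuration as Code, the recreated cloud entry can also be declared there. This is a minimal sketch only; the name, serverUrl, credentialsId, and jenkinsUrl values are placeholders you would replace with your own:

```yaml
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"                      # placeholder cloud name
        serverUrl: "https://<your-apiserver>"   # your cluster API endpoint
        namespace: "jenkins-agents"             # namespace for agent pods
        credentialsId: "k8s-sa-token"           # the NEW credential, not the cached one
        jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
        webSocket: true
```

Reapplying a fresh JCasC definition amounts to the same delete-and-recreate step, so the cloud entry starts from the new credential rather than the cached one.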