Jenkins controller randomly losing TCP inbound agent fixed port 50000

Hi,

Our Jenkins environment has a weird issue where the Jenkins controllers (running in k8s) are randomly losing the TCP inbound agent port 50000. Losing, as in, they are no longer listening and the port cannot be observed when I run

kubectl exec -it <controller pod> -- bash
netstat -pant | grep 50000

There are multiple controller instances in our environment and only a few (random ones) see this issue. There are no logs in the controller to troubleshoot further.

Can someone point me to the docs for the TCP inbound agent listener? I am looking to increase the log level on the Jenkins controller for the listener port (fixed at 50000 right now).
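
So far my best guess is to add a log recorder under Manage Jenkins > System Log, or set the level from an init script. A rough sketch of what I plan to try, assuming hudson.TcpSlaveAgentListener and org.jenkinsci.remoting are the relevant logger names (please correct me if they are not), dropped into $JENKINS_HOME/init.groovy.d/ on the controller:

import java.util.logging.Level
import java.util.logging.Logger

// Inbound TCP agent listener (assumed logger/class name)
Logger.getLogger('hudson.TcpSlaveAgentListener').setLevel(Level.FINE)

// Remoting protocol layer used by inbound agents (also an assumption)
Logger.getLogger('org.jenkinsci.remoting').setLevel(Level.FINE)

Is there anything else worth turning up?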

I couldn’t find anything in the docs right away. Most Jenkins docs point to using WebSockets instead (which we have started to evaluate).

There’s no open Jenkins issue in Jira for this either. Before I open one, I was hoping to collect more information.

Restarting the Jenkins server (the underlying k8s pod) or manually going into /configureSecurity to disable the TCP inbound port and then re-enable the fixed port resolves the issue.
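
If it keeps happening, I will probably script that toggle from the script console instead of clicking through /configureSecurity each time. A rough sketch of what I have in mind (untested; I assume this will fail if the port is enforced via a system property):

import jenkins.model.Jenkins

def jenkins = Jenkins.get()

// -1 disables the inbound TCP agent port and tears down the listener
jenkins.setSlaveAgentPort(-1)
jenkins.save()

// Re-enable the fixed port, which should start a fresh listener thread
jenkins.setSlaveAgentPort(50000)
jenkins.save()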


We have major problems with that same issue. I see that the web port 8080 is still in LISTEN mode, but we can’t access the web interface either, so I have not been able to test your workaround of disabling the port and re-enabling it.

I’m also looking for a solution to this problem, but I can tell you how to fix this situation manually (ad hoc).
It is enough to go to the Jenkins settings page and change the port from 50000 to 50001, save, and then change 50001 back to 50000 and save again.
It looks like it causes a kind of reset of the listener thread.


@kobetsu If this helps, you are my superhero. I will try it the next time it gets stuck.

Because we have the same issue and it is more or less critical for us, I created the Jira issue [JENKINS-70161] Blocked JNLP port - Jenkins Jira. We will see if somebody can fix it.
