Jenkins controller randomly losing TCP inbound agent fixed port 50000


Our Jenkins environment has a weird issue where the Jenkins controllers (running in Kubernetes) are randomly losing the TCP inbound agent port 50000. "Losing" as in: they are no longer listening on it, which I can confirm when I run

kubectl exec -it <controller pod> -- bash
netstat -pant | grep 50000
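As a cross-check, here is a small sketch of a probe that does not depend on netstat (which is often missing from slim controller images). It uses bash's /dev/tcp pseudo-device; the port number 50000 is from this setup, and the helper name check_port is just something I made up:

```shell
# Probe whether a local TCP port accepts connections, without netstat/ss/nc.
# Invokes bash explicitly since /dev/tcp is a bash feature, not POSIX sh.
check_port() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/127.0.0.1/$1" 2>/dev/null; then
    echo "port $1 is accepting connections"
  else
    echo "port $1 is NOT listening"
  fi
}

check_port 50000
```

Running this inside the pod distinguishes "listener gone" from "listener up but connections hanging", which netstat alone can't tell you.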

There are multiple controller instances in our environment and only a few (seemingly random ones) see this issue. There are no logs on the controller to troubleshoot further.

Can someone point me to the docs for the TCP inbound agent listener? I am looking to increase the log level on the Jenkins controller for the listener port (fixed at 50000 right now).
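In case it helps anyone, here is a sketch of how I'm planning to raise the verbosity. It assumes the listener class is hudson.TcpSlaveAgentListener (that class name is from Jenkins core) and that the controller uses java.util.logging; the file path is just an example for a containerized controller:

```shell
# Write a java.util.logging config that raises verbosity for the
# inbound TCP agent listener (path is an example; adjust for your image).
cat > /tmp/agent-listener-logging.properties <<'EOF'
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = ALL
# Jenkins core class that owns the inbound agent listener port
hudson.TcpSlaveAgentListener.level = FINEST
EOF

# Then point the controller JVM at it, e.g. via JAVA_OPTS:
#   -Djava.util.logging.config.file=/tmp/agent-listener-logging.properties
```

The same effect should also be achievable without a restart by adding a custom log recorder for hudson.TcpSlaveAgentListener under Manage Jenkins → System Log.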

I couldn’t find anything in the docs right away. Most Jenkins docs point towards using WebSockets instead (which we have started to evaluate).

There’s no open Jenkins issue in JIRA either. Before I open one, I was hoping to collect more information.

Restarting the Jenkins server (the underlying Kubernetes pod), or manually going into /configureSecurity to disable the TCP inbound port and then re-enable the fixed port, resolves the issue.
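For what it's worth, the /configureSecurity toggle can be scripted so we don't have to click through the UI each time. This is only a sketch: setSlaveAgentPort(int) is a real method on jenkins.model.Jenkins, but the URL, credentials, and file path below are placeholders:

```shell
# Groovy to disable and re-enable the fixed inbound port, for the
# script console (requires Overall/Administer permission).
cat > /tmp/toggle-inbound-port.groovy <<'EOF'
import jenkins.model.Jenkins
def j = Jenkins.get()
j.setSlaveAgentPort(-1)     // disable the TCP inbound agent port
j.setSlaveAgentPort(50000)  // re-enable the fixed port
j.save()
EOF

# Submit it to a live controller (placeholders: $JENKINS_URL, $USER, $TOKEN):
# curl -su "$USER:$TOKEN" "$JENKINS_URL/scriptText" \
#   --data-urlencode "script=$(cat /tmp/toggle-inbound-port.groovy)"
```

Obviously this only automates the workaround, not the root cause, but it might help others hitting the same thing in the meantime.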


We have major problems with that same issue. I see that web port 8080 is still in LISTEN state, but we can’t access the web interface either, so I have not been able to test your workaround of disabling the port and re-enabling it.