After I ran git clone xxxx.git, typed yes, and saved the known host inside both the container and the host, it could pull the code but still gives me this notification.
"Is it the Built-In Node?" is probably a better description.
In this case it looks like no; it's 33.168.
I can't cite anything because you used a screenshot instead of text, but the blue text in your screenshot tells you what's going on.
(The agent or the controller, but I don't think an agent is assigned yet) first tries to get branches/tags/etc. from the remote, but known_hosts (either $HOME/.ssh/known_hosts or /etc/ssh/known_hosts) doesn't exist on the controller.
You either want to have that file populated properly (ssh-keyscan will work), or, as the error message says, set up host keys globally.
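If you go the ssh-keyscan route, a sketch like the following populates the file; run it as whatever user Jenkins uses for the clone, and github.com here is just a stand-in for your actual git server:

```shell
#!/bin/sh
# Make sure the .ssh directory exists with sane permissions.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"

# Fetch the git server's public host keys and append them to known_hosts.
# Replace github.com with your own git server's hostname.
ssh-keyscan github.com >> "$HOME/.ssh/known_hosts" 2>/dev/null

chmod 644 "$HOME/.ssh/known_hosts"
```

After that, an ssh-based git clone from that host should no longer prompt about an unknown host key.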
Same here: I think the blue-text part happened before anything reached the agent, i.e. on the master node (or controller). So I tried to get the SSH keys and known hosts ready both on the host and in the container.
And now, as you can see, I could clone/pull the code locally:
Oh! I forgot. I'm not sure about the user part, but I remember it runs as the jenkins user? So do I need to create a user named "jenkins", set up the known hosts and SSH key for it, and then give it a try?
I tried to assign the controller's label as master and run "whoami".
This is the output:
Started by user roger
Replayed #63
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/simulator-setup
[Pipeline] {
[Pipeline] sh
+ whoami
root
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
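For what it's worth, that test can be reproduced with a scripted pipeline like this sketch; the "linux-agent" label is hypothetical, and the point is that pinning the node block to a real agent label (rather than the built-in node) keeps jobs off the controller:

```groovy
// Minimal sketch of the whoami test above, pinned to an agent label
// instead of the built-in node. "linux-agent" is a placeholder; use
// a label that one of your agents actually carries.
node('linux-agent') {
    stage('Check user') {
        sh 'whoami'   // should print an unprivileged service user, not root
    }
}
```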
I again suggest setting up the known-hosts configuration in your global Manage Jenkins settings instead of depending on how each agent is set up. That makes it easier to add and remove agents.
Running the Jenkins controller as the root user is a disaster waiting to happen. Don't do that. Run the Jenkins controller as a normal user or as a service user so that your system cannot be destroyed by a mistake in a Jenkins job.
Don't allow jobs to run on the Jenkins controller. That is also a disaster waiting to happen.
The combination of allowing jobs on the controller and running as the root user means that if a job made the mistake of running rm -rf /, your system would be destroyed.
There are a few different solutions to the message about host keys.
If you're running on an operating system with OpenSSH 7.6 or newer (FreeBSD, macOS, any supported Linux except CentOS 7), then you can set the "Git Host Key Verification Configuration" in the git plugin's "Configure Global Security" settings to the "Accept first connection" strategy.
If you're running CentOS 7 or one of its derivatives, I recommend an operating system upgrade. If that's not possible, then see the git client plugin documentation for other alternatives.
If you want to define the contents of your own known_hosts file, you are welcome to do that as well. Here are the host keys for some common git providers: