I am getting errors using checkout scm in a pipeline, because of two issues.
The setup:
- Private Kubernetes cluster - 1 controller, 2 workers, on Ubuntu 20.04 VMs
- Jenkins running in Kubernetes pods
- Kubernetes plug-in to instantiate Jenkins build agents
- Private Git server on the controller VM, outside of the cluster, with ssh access
- ssh private key for the Git server configured in Jenkins credentials
- Jenkins project 'hello' configured to use this private Git server and the associated ssh key
- Jenkinsfile (pipeline) to build
I want to use a simple checkout scm step in the Jenkinsfile.

Problem 1: The build fails with "Host key verification failed." because the Kubernetes agent pod does not have the Git server in its known_hosts.

Problem 2: If I force the Git server's host key into known_hosts (for example, by hard-coding an echo into the Jenkinsfile and then adding a git ls-remote step), it fails with "Permission denied" because the configured ssh private key is not present in the agent pod.
I've found a workaround for both of these:
podTemplate(
    ...
) {
    node(POD_LABEL) {
        stage('Checkout') {
            withCredentials([sshUserPrivateKey(
                credentialsId: 'private_git',
                keyFileVariable: 'PRIVATE_GIT_KEY',
                passphraseVariable: '',
                usernameVariable: ''
            )]) {
                sh 'mkdir -p ~/.ssh'
                sh 'cp $PRIVATE_GIT_KEY ~/.ssh/id_rsa'
                sh '/usr/bin/ssh-keyscan -t rsa kube-master.cluster.dev >> ~/.ssh/known_hosts'
                sh 'chown -R $USER:$USER ~/.ssh'
                sh '/usr/bin/git ls-remote ssh://git@kube-master.cluster.dev:/git/hello.git'
            }
            checkout scm
        }
        ...
    }
}
What do I need to do to avoid this workaround and just use checkout scm as intended?
Example failure log:
Running on build-pod-xdh86-53wh7 in /home/jenkins/agent/workspace/hello
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Checkout)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
The recommended git tool is: NONE
using credential private_git
Cloning the remote Git repository
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --force --progress -- ssh://git@kube-master.cluster.dev/git/hello.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
A reasonable solution is to place the keys in Kubernetes secrets, and mount the secrets in the Jenkins pod.
On your controller machine:
- Make a scratch user account and shell into it.
- Generate the ssh key pair (typical ssh-keygen).
- Add id_rsa.pub to the git server's authorized_keys.
- Connect to the git server over ssh once to produce known_hosts, for example: git ls-remote ssh://git@kube-master.cluster.dev:/git/hello.git
- Copy the scratch user's ~/.ssh/id_rsa private key and ~/.ssh/known_hosts files to a place where kubectl can read them, such as /tmp/scratchuser.
- Exit the scratch user account and delete it.
- sudo chown -R $USER:$USER /tmp/scratchuser/
- Add the id_rsa and known_hosts to Kubernetes, with a command like: kubectl create secret -n jenkins generic private-git --from-file=id_rsa=/tmp/scratchuser/.ssh/id_rsa --from-file=known_hosts=/tmp/scratchuser/.ssh/known_hosts
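
Put together, the controller-side steps might look like the following sketch; the scratch user name and the repository URL are just the examples from above, so adapt them to your environment:

# on the controller VM, as an admin user
sudo adduser scratchuser
sudo su - scratchuser

# as scratchuser: generate the key pair and collect the git server's host key
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# ...add ~/.ssh/id_rsa.pub to the git server's authorized_keys, then:
git ls-remote ssh://git@kube-master.cluster.dev:/git/hello.git   # answering 'yes' populates ~/.ssh/known_hosts
mkdir -p /tmp/scratchuser
cp -r ~/.ssh /tmp/scratchuser/.ssh
exit

# back as the admin user
sudo deluser --remove-home scratchuser
sudo chown -R $USER:$USER /tmp/scratchuser/
kubectl create secret -n jenkins generic private-git \
  --from-file=id_rsa=/tmp/scratchuser/.ssh/id_rsa \
  --from-file=known_hosts=/tmp/scratchuser/.ssh/known_hosts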
Deploy Jenkins with a yaml containing some specific things:
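The exact manifest depends on how your Jenkins master is deployed; a minimal sketch showing only the relevant pieces (the Deployment layout, image, fsGroup value, and the /var/jenkins_home/.ssh mount path are assumptions to adapt) might be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext:
        fsGroup: 1000                          # gid of the 'jenkins' user in the official image
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        volumeMounts:
        - name: private-git-vol
          mountPath: /var/jenkins_home/.ssh    # id_rsa and known_hosts end up here
          readOnly: true
      volumes:
      - name: private-git-vol
        secret:
          secretName: private-git              # the secret created with kubectl above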
The key points in the above yaml are fsGroup, so that the pod user jenkins can access the mounted secret volume; the private-git-vol mount, which places the secret files into the .ssh path; and the private-git-vol volume definition, which refers to the secret created with kubectl above.

One more hint. For a Jenkinsfile that instantiates build agent pods, see Declarative Pipeline. You might need to abandon podTemplate() and specify the agent pod yaml completely:
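
A minimal sketch of such a declarative Jenkinsfile, assuming the pod definition is kept in a KubernetesPod.yaml file next to it:

pipeline {
    agent {
        kubernetes {
            // use your own pod definition instead of podTemplate()
            yamlFile 'KubernetesPod.yaml'
        }
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
    }
}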
And in KubernetesPod.yaml, include the jnlp container (the jenkins/inbound-agent image) in your own yaml instead of letting the Kubernetes plug-in generate it. This will allow you to use fsGroup in the build agent, as described above for the Jenkins master.
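
A sketch of such a KubernetesPod.yaml; the image tag, the fsGroup value, and the /home/jenkins/.ssh mount path are assumptions:

apiVersion: v1
kind: Pod
spec:
  securityContext:
    fsGroup: 1000                       # let the agent's jenkins user read the mounted secret
  containers:
  - name: jnlp
    image: jenkins/inbound-agent:latest
    volumeMounts:
    - name: private-git-vol
      mountPath: /home/jenkins/.ssh     # id_rsa and known_hosts for the build agent
      readOnly: true
  volumes:
  - name: private-git-vol
    secret:
      secretName: private-git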