Jenkins kubernetes agent checkout scm not using configured key


I am getting errors using checkout scm in a pipeline, because of two issues.

The setup:

  • Private Kubernetes cluster - 1 controller, 2 workers, on Ubuntu 20.04 VMs
  • Jenkins running in Kubernetes pods
  • Kubernetes plug-in to instantiate Jenkins build agents
  • Private Git server on the controller VM, outside of the cluster, accessed over SSH
  • SSH private key for the Git server configured in Jenkins credentials
  • Jenkins project 'hello' configured to use this private Git server and the associated SSH key
  • Jenkinsfile (pipeline) to build

I want to use a simple checkout scm step in the Jenkinsfile.

Problem 1: The build fails with "Host key verification failed." because the Kubernetes agent pod does not have the Git server in its known_hosts.

Problem 2: If I force the Git server's host key into known_hosts (for example, hard-code an echo into the Jenkinsfile, then add a git ls-remote step), the build fails with "Permission denied" because the configured SSH private key is not present in the agent pod.

I've found a workaround for both of these:

podTemplate(
...
{
  node(POD_LABEL) {
    stage('Checkout') {
      withCredentials([sshUserPrivateKey(
          credentialsId: 'private_git',
          keyFileVariable: 'PRIVATE_GIT_KEY',
          passphraseVariable: '',
          usernameVariable: ''
      )]) {
        // Manually install the private key and the server's host key
        // before checkout scm runs
        sh 'mkdir -p ~/.ssh'
        sh 'cp $PRIVATE_GIT_KEY ~/.ssh/id_rsa'
        sh '/usr/bin/ssh-keyscan -t rsa kube-master.cluster.dev >> ~/.ssh/known_hosts'
        sh 'chown -R $USER:$USER ~/.ssh'
        sh '/usr/bin/git ls-remote ssh://[email protected]:/git/hello.git'
      }
      checkout scm
    }
  ...
  }
}

What do I need to change to avoid this workaround and just use checkout scm as intended?

Example failure log:

Running on build-pod-xdh86-53wh7 in /home/jenkins/agent/workspace/hello
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Checkout)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
The recommended git tool is: NONE
using credential private_git
Cloning the remote Git repository
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --force --progress -- ssh://[email protected]/git/hello.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
1 Answer

Answered by jws:

A reasonable solution is to place the keys in Kubernetes secrets, and mount the secrets in the Jenkins pod.

On your controller machine:

  1. Make a scratch user account and shell into it

  2. Make the secret (typical ssh-keygen)

  3. Add id_rsa.pub to the Git server's authorized_keys

  4. Connect to the Git server over SSH once to produce known_hosts, for example

    git ls-remote ssh://[email protected]:/git/hello.git

  5. Copy the scratch user's ~/.ssh/id_rsa private key and ~/.ssh/known_hosts files to a place where kubectl can read them, such as /tmp/scratchuser

  6. Exit the scratch user account and delete it

  7. sudo chown -R $USER:$USER /tmp/scratchuser/

  8. Add the id_rsa and known_hosts to Kubernetes, with a command like

    kubectl create secret -n jenkins generic private-git --from-file=id_rsa=/tmp/scratchuser/.ssh/id_rsa --from-file=known_hosts=/tmp/scratchuser/.ssh/known_hosts

  9. Deploy Jenkins with a yaml containing some specific things:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jenkins
      namespace: jenkins
    spec:
      ... your other options such as replicas, selector, etc ...
      template:
        metadata: ... your metadata section ...
        spec:
          securityContext:
            fsGroup: 1000
          containers:
          - name: jenkins
            image: jenkins/jenkins:lts
            ports:
              ... standard jenkins ports ...
            volumeMounts:
              - name: jenkins-vol
                mountPath: /var/jenkins_home
              - name: private-git-vol
                mountPath: "/var/jenkins_home/.ssh"
                readOnly: true
          volumes:
            - name: jenkins-vol
              ... your persistent volume details ...
            - name: private-git-vol
              secret:
                secretName: private-git
                defaultMode: 0600
          ... your other options such as dnsPolicy, etc. ...
    

The key points in the above yaml are: fsGroup, so that the pod user jenkins can read the mounted secret volume; the private-git-vol mount, which places the secret files into the .ssh path; and the private-git-vol volume definition, which refers to the secret created with kubectl above.
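
Steps 1 through 8 above can be condensed into a shell sketch. The hostnames, paths, credential names, and the jenkins namespace follow the question and answer; the scratchuser account name and the adduser/deluser commands are assumptions (adjust for your distribution):

    # On the controller VM: create a throwaway account to isolate the new key pair
    sudo adduser scratchuser
    sudo -iu scratchuser

    # Step 2: generate a key pair (no passphrase, since Jenkins will use it unattended)
    ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''

    # Step 3 is manual: append ~/.ssh/id_rsa.pub to the git user's authorized_keys
    # on the Git server. Then connect once so ssh records the host key (step 4):
    git ls-remote ssh://[email protected]:/git/hello.git

    # Step 5: stage the files where kubectl can read them
    mkdir -p /tmp/scratchuser/.ssh
    cp ~/.ssh/id_rsa ~/.ssh/known_hosts /tmp/scratchuser/.ssh/
    exit

    # Steps 6-8: remove the scratch account, take ownership, create the secret
    sudo deluser --remove-home scratchuser
    sudo chown -R $USER:$USER /tmp/scratchuser/
    kubectl create secret -n jenkins generic private-git \
      --from-file=id_rsa=/tmp/scratchuser/.ssh/id_rsa \
      --from-file=known_hosts=/tmp/scratchuser/.ssh/known_hosts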

One more hint. For a Jenkinsfile that instantiates build agent pods, see Declarative Pipeline. You might need to abandon podTemplate() and specify the agent pod yaml completely:

pipeline {
  agent {
    kubernetes {
      yamlFile 'KubernetesPod.yaml'
    }
  }
  ... your build steps ...
}

and in KubernetesPod.yaml, include a jnlp container (jenkins/inbound-agent image) in your own yaml instead of letting the Kubernetes plug-in generate it. This allows you to set fsGroup in the build agent pod, as described above for the main Jenkins pod.
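
A minimal KubernetesPod.yaml along those lines might look like the sketch below. The jnlp container name, the jenkins/inbound-agent image, and the private-git secret name follow the answer; the mount path and fsGroup value of 1000 are assumptions based on the image's default jenkins user:

    apiVersion: v1
    kind: Pod
    spec:
      securityContext:
        fsGroup: 1000          # lets the jenkins user (uid 1000) read the secret volume
      containers:
      - name: jnlp             # replaces the plug-in's default agent container
        image: jenkins/inbound-agent:latest
        volumeMounts:
        - name: private-git-vol
          mountPath: /home/jenkins/.ssh   # ssh looks up id_rsa and known_hosts here
          readOnly: true
      volumes:
      - name: private-git-vol
        secret:
          secretName: private-git
          defaultMode: 0600

With this in place, the agent pod starts with both the private key and known_hosts already present, so checkout scm works without any of the withCredentials steps from the workaround.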