We're using GitLab Runner with the Kubernetes executor, and we want to do something that I believe is currently not possible: assign the GitLab Runner daemon's pod to a worker node in one node group (instance type X) and the job pods to worker nodes in a different node group Y, since the jobs usually need more compute resources than the Runner's pod.

The motivation is cost savings: the node hosting the GitLab Runner daemon is always running, so we want it on a cheap instance, while the jobs, which need more compute capacity, can run on instances of a different type that are started by the Cluster Autoscaler and destroyed when no jobs are present.

I investigated this, and the available way to assign pods to specific nodes is to use a node selector or node affinity, but the rules defined in those two configuration sections apply to all of the GitLab Runner's pods, both the main pod and the job pods. The proposal is to make it possible to apply two separate configurations, one for the GitLab Runner's pod and one for the job pods.

The existing configuration consists of the node selector and node/pod affinity, but as mentioned, these apply globally to all pods rather than to specific ones, which is what we need in our case.

GitLab Runner Kubernetes executor configuration: https://docs.gitlab.com/runner/executors/kubernetes.html


There are 3 answers

Rshad Zhran (best answer)

This problem is solved! After further investigation I found that the GitLab Runner Helm chart provides two nodeSelector settings that do exactly what I was looking for: one for the main pod, which is the GitLab Runner pod itself, and one for the GitLab Runner's job pods. Below is a sample of the Helm chart values, with a comment beside each nodeSelector indicating its scope and the pods it affects.

Note that the top-level nodeSelector is the one that affects the main GitLab Runner pod, while runners.kubernetes.node_selector is the one that affects the GitLab Runner's job pods.

gitlabUrl: https://gitlab.com/
...
# Top-level nodeSelector: affects the main GitLab Runner (manager) pod
nodeSelector:
  gitlab-runner-label-example: label-values-example-0
...

runnerRegistrationToken: ****
...
runners:
  config: |
    [[runners]]
      name = "gitlabRunnerExample"
      executor = "kubernetes"
      environment = ["FF_USE_LEGACY_KUBERNETES_EXECUTION_STRATEGY=true"]

      [runners.kubernetes]
        ...
        # node_selector under runners.kubernetes: affects the job pods
        [runners.kubernetes.node_selector]
          "gitlab-runner-label-example" = "label-values-example-1"

      [runners.cache]
        ...
        [runners.cache.s3]
          ...
...
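As a quick sanity check after deploying with these values (the namespace below is just an example, and the job pods only exist while a pipeline is running), you can look at the NODE column to confirm that the manager pod and the job pods land on different nodes; job pods show up with a runner- name prefix:

kubectl get pods -o wide -n gitlab-runner
kubectl get pods -o wide -n gitlab-runner | grep runner-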

Jean-Pascal J.

When using the Helm chart, there is an additional configuration section where you can pass extra runner settings.

Two of those settings are the node selector for the job pods and the tolerations, as shown in the sketch below.

The combination of these and some namespace-level configuration should allow you to run the two kinds of pods on different node types.
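A rough sketch of what that could look like in the chart's values (the labels, taint key, and values here are placeholders, not from the original answer): the top-level nodeSelector keeps the manager pod on the cheap, always-on node group, while node_selector and node_tolerations inside runners.config send the job pods to a tainted, autoscaled node group.

# Manager pod stays on the small, always-on node group (placeholder label)
nodeSelector:
  node-group: runner-manager

runners:
  config: |
    [[runners]]
      executor = "kubernetes"
      [runners.kubernetes]
        # Job pods are scheduled onto the larger, autoscaled node group (placeholder label)
        [runners.kubernetes.node_selector]
          "node-group" = "ci-jobs"
        # Tolerate the taint that keeps other workloads off the job nodes (placeholder taint)
        [runners.kubernetes.node_tolerations]
          "dedicated=ci-jobs" = "NoSchedule"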

Zhakyp Zhoomart uulu

I was also facing issues with our GitLab Runner because it was running on different nodes, which didn't all have the necessary permissions. To solve this, I added node affinity to the Runner's configuration, which ensures that it always runs on a specific node, the one we've granted the right permissions to. Since making this change, our pipelines have been running smoothly without any access-related issues.

Note: Replace the value in the configuration with the hostname of one of your own nodes.

ip-0.0.0.0.ec2.internal

Use this command to view node labels: kubectl get nodes --show-labels

Note: For more information, take a look at this documentation.

config: |
  [[runners]]
    [runners.kubernetes]
      namespace = "eldiar-efs-test"
      image = "alpine"
      [runners.cache]
        Type = "efs"
        Path = "/mnt/efs"
        Shared = true
      [[runners.kubernetes.volumes.pvc]]
        name = "nfs-claim"
        mount_path = "/mnt/efs"
      [runners.kubernetes.affinity]
        [runners.kubernetes.affinity.node_affinity]
          [runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution]
            [[runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution.node_selector_terms]]
              [[runners.kubernetes.affinity.node_affinity.required_during_scheduling_ignored_during_execution.node_selector_terms.match_expressions]]
                key = "kubernetes.io/hostname"
                operator = "In"
                values = [
                  "ip-0.0.0.0.ec2.internal"
                ]
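Assuming this block lives under runners: in the chart's values file (the release and namespace names below are examples), the Runner can then be deployed or updated with the official chart:

helm repo add gitlab https://charts.gitlab.io
helm upgrade --install gitlab-runner gitlab/gitlab-runner \
  --namespace gitlab-runner \
  -f values.yaml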

Additionally, create a ClusterRole and a ClusterRoleBinding

To give the node the necessary permissions, you can create a ClusterRole and a ClusterRoleBinding. However, since these were created locally and are not part of the GitLab CI pipeline, make sure they are applied to the cluster before deploying the GitLab Runner.

cluster-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  namespace: gitlab-runner
  name: gitlab-pipeline-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "create", "patch", "update"]

In cluster-role-binding.yaml, the subject is the node user system:node:ip-0.0.0.0.us-west-2.compute.internal. This ClusterRoleBinding binds the ClusterRole gitlab-pipeline-role to that node, granting it the associated permissions.

cluster-role-binding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-node-role-binding
  namespace: gitlab-runner
subjects:
- kind: User
  name: system:node:ip-0.0.0.0.us-west-2.compute.internal
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: gitlab-pipeline-role
  apiGroup: rbac.authorization.k8s.io
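Both manifests can then be applied before installing the Runner, for example:

kubectl apply -f cluster-role.yaml
kubectl apply -f cluster-role-binding.yaml

# Confirm they exist before deploying the GitLab Runner
kubectl get clusterrole gitlab-pipeline-role
kubectl get clusterrolebinding gitlab-node-role-binding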