aws-iam-authenticator daemon set not running


I'm trying to set up the aws-iam-authenticator container on AWS EKS, but I've been stuck for hours trying to get the daemon started. I'm following the instructions in the aws-iam-authenticator repo, using deploy/example.yml as my starting point. I've already modified the roles, clusterID, and other required components (roughly as sketched below), but still no luck after applying the deployment.
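
For context, the parts I edited live in the ConfigMap section of the example manifest; mine looks roughly like this (the cluster ID, account ID, and role name below are placeholders, not my real values):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: kube-system
      name: aws-iam-authenticator
      labels:
        k8s-app: aws-iam-authenticator
    data:
      config.yaml: |
        # unique-per-cluster ID, used to prevent token replay across clusters
        clusterID: my-cluster.example.com
        server:
          # map an IAM role to a Kubernetes username and set of groups
          mapRoles:
          - roleARN: arn:aws:iam::000000000000:role/KubernetesAdmin
            username: kubernetes-admin
            groups:
            - system:masters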

I just enabled control plane logging (see the command after this paragraph), so I'm hoping there are further details in there. I also came across a post where people mentioned restarting the control plane nodes, but I haven't found a way to do that on EKS yet.
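
In case it matters, this is roughly how I enabled the logging (the cluster name and region below are placeholders); as far as I can tell the logs should end up in the CloudWatch log group /aws/eks/<cluster-name>/cluster:

    # enable all EKS control plane log types for the cluster
    aws eks update-cluster-config \
      --region us-west-2 \
      --name my-cluster \
      --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'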

If anyone has quick tips or other places to check, I'd greatly appreciate it :)

$ kubectl get ds -n kube-system
    NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
    aws-iam-authenticator   0         0         0       0            0           node-role.kubernetes.io/master=   8h
    aws-node                3         3         3       3            3           <none>                            3d22h
    kube-proxy              3         3         3       3            3           <none>                            3d22h

Additional outputs

$ kubectl get ds aws-iam-authenticator -n kube-system --output=yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"k8s-app":"aws-iam-authenticator"},"name":"aws-iam-authenticator","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"k8s-app":"aws-iam-authenticator"}},"template":{"metadata":{"annotations":{"scheduler.alpha.kubernetes.io/critical-pod":""},"labels":{"k8s-app":"aws-iam-authenticator"}},"spec":{"containers":[{"args":["server","--config=/etc/aws-iam-authenticator/config.yaml","--state-dir=/var/aws-iam-authenticator","--generate-kubeconfig=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml"],"image":"602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.4.0","name":"aws-iam-authenticator","resources":{"limits":{"cpu":"100m","memory":"20Mi"},"requests":{"cpu":"10m","memory":"20Mi"}},"volumeMounts":[{"mountPath":"/etc/aws-iam-authenticator/","name":"config"},{"mountPath":"/var/aws-iam-authenticator/","name":"state"},{"mountPath":"/etc/kubernetes/aws-iam-authenticator/","name":"output"}]}],"hostNetwork":true,"nodeSelector":{"node-role.kubernetes.io/master":""},"serviceAccountName":"aws-iam-authenticator","tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"key":"CriticalAddonsOnly","operator":"Exists"}],"volumes":[{"configMap":{"name":"aws-iam-authenticator"},"name":"config"},{"hostPath":{"path":"/etc/kubernetes/aws-iam-authenticator/"},"name":"output"},{"hostPath":{"path":"/var/aws-iam-authenticator/"},"name":"state"}]}},"updateStrategy":{"type":"RollingUpdate"}}}
  creationTimestamp: "2020-03-24T06:47:54Z"
  generation: 4
  labels:
    k8s-app: aws-iam-authenticator
  name: aws-iam-authenticator
  namespace: kube-system
  resourceVersion: "601895"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/daemonsets/aws-iam-authenticator
  uid: 63e8985a-54cc-49a8-b343-3e20b4d9eaff
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: aws-iam-authenticator
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        k8s-app: aws-iam-authenticator
    spec:
      containers:
      - args:
        - server
        - --config=/etc/aws-iam-authenticator/config.yaml
        - --state-dir=/var/aws-iam-authenticator
        - --generate-kubeconfig=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
        image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon/aws-iam-authenticator:v0.4.0
        imagePullPolicy: IfNotPresent
        name: aws-iam-authenticator
        resources:
          limits:
            cpu: 100m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/aws-iam-authenticator/
          name: config
        - mountPath: /var/aws-iam-authenticator/
          name: state
        - mountPath: /etc/kubernetes/aws-iam-authenticator/
          name: output
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: aws-iam-authenticator
      serviceAccountName: aws-iam-authenticator
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: aws-iam-authenticator
        name: config
      - hostPath:
          path: /etc/kubernetes/aws-iam-authenticator/
          type: ""
        name: output
      - hostPath:
          path: /var/aws-iam-authenticator/
          type: ""
        name: state
  templateGeneration: 4
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0
  observedGeneration: 4
1 Answer

Answered by Arian Motamedi (accepted):

The issue is with the nodeSelector field. According to the Kubernetes docs on label selectors, an empty value does not necessarily act as a wildcard; the behavior depends on the API type that uses the selector:

The semantics of empty or non-specified selectors are dependent on the context, and API types that use selectors should document the validity and meaning of them.

I couldn't find the empty-value behavior documented for a DaemonSet's nodeSelector, but this GCE example specifically says to omit the nodeSelector field to schedule on all nodes, which you confirmed worked in your case as well.
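
A minimal sketch of the fix, assuming the rest of the manifest from deploy/example.yml stays as-is (EKS worker nodes don't carry the node-role.kubernetes.io/master label, which is why DESIRED stayed at 0):

    # 1. In deploy/example.yml, delete these two lines from the DaemonSet's pod template:
    #        nodeSelector:
    #          node-role.kubernetes.io/master: ""

    # 2. Optionally, confirm which labels your nodes actually carry:
    kubectl get nodes --show-labels

    # 3. Reapply and watch the pods get scheduled:
    kubectl apply -f deploy/example.yml
    kubectl get ds aws-iam-authenticator -n kube-system --watch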