How does Kubernetes select nodes to add to the load balancers on AWS?


Some info:

  • Kubernetes (1.5.1)
  • AWS
  • 1 master and 1 node (both Ubuntu 16.04)
  • k8s installed via kubeadm
  • Infrastructure created by me with Terraform

Please don't reply with "use kube-up, kops, or similar". This is about understanding how k8s works under the hood. There is far too much unexplained magic in the system, and I want to understand it.

Question:

When creating a Service of type LoadBalancer on k8s on AWS, for example:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-addon: kubernetes-dashboard.addons.k8s.io
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    facing: external
spec:
  type: LoadBalancer
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80

I successfully create an internal or external-facing ELB, but none of the machines are added to it (I can taint the master too, but nothing changes). My problem is basically this one:

https://github.com/kubernetes/kubernetes/issues/29298#issuecomment-260659722

The subnets and nodes (but not the VPC) are all tagged with "KubernetesCluster" (and again, the ELBs are created in the right place), yet no nodes are added.
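To be concrete about the tagging I did (the resource IDs and cluster name below are placeholders):

# Every subnet and instance belonging to the cluster gets the same key/value
aws ec2 create-tags \
  --resources subnet-0123abcd i-0456efgh \
  --tags Key=KubernetesCluster,Value=my-cluster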

In the logs:

kubectl logs kube-controller-manager-ip-x-x-x-x -n kube-system

after the line

aws_loadbalancer.go:63] Creating load balancer for kube-system/kubernetes-dashboard with name: acd8acca0c7a111e69ca306f22de69ae

there is no other output (it should print the nodes added or removed). I tried to understand the code at:

https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws_loadbalancer.go, but whatever the reason is, this function does not add the nodes.

The documentation doesn't go to any great length to explain the "process" behind k8s decisions. To try to understand k8s I have tried kops, kube-up, kubeadm, and the Kubernetes The Hard Way repo, and read the damn code, but I am still unable to understand how k8s on AWS selects the nodes to add to the ELB.

As a consequence, no security group is changed anywhere either.

Is it a tag on the EC2 instances? A kubelet setting? Anything else?

Any help is greatly appreciated.

Thanks, F.


There are 3 answers

Ross Fairbanks:

I think the node registration is being managed outside of Kubernetes. I'm using kops, and if I edit the size of my ASG in AWS the new nodes are not registered with my service ELBs, but if I edit the number of nodes using kops the new nodes are there.

In the docs a kops instance group maps to an ASG when running on AWS. In the code it looks like it's calling AWS rather than a k8s API.

I know you're not using kops, but I think in Terraform you need to replicate the AWS API calls that kops is making.

mgoodness:

I think Steve is on the right track. Make sure your kubelets, apiserver, and controller-manager all include --cloud-provider=aws in their argument lists.
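On a kubeadm cluster that flag lives in three different places; roughly (the paths below are the usual kubeadm defaults, so treat this as a sketch rather than a recipe):

# kubelet: append the flag to the kubelet arguments in the systemd drop-in,
# e.g. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#   ExecStart=/usr/bin/kubelet ... --cloud-provider=aws

# apiserver / controller-manager: add the flag to their static pod
# manifests under /etc/kubernetes/manifests/
#   - --cloud-provider=aws

# then reload and restart the kubelet so the changes take effect
systemctl daemon-reload
systemctl restart kubelet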

You mention your subnets and instances all have matching KubernetesCluster tags. Do your controller & worker security groups? K8s will modify the worker SG in particular to allow traffic to/from the service ELBs it creates. I tag my VPC as well, though I guess it's not required and may prohibit another cluster from living in the same VPC.

I also tag my private subnets with kubernetes.io/role/internal-elb=true and public ones with kubernetes.io/role/elb=true to identify where internal and public ELBs can be created.
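Put together, the extra tagging looks something like this (the security group and subnet IDs are placeholders, and the cluster name must match the one on your other resources):

# Let k8s find and modify the controller/worker security groups
aws ec2 create-tags \
  --resources sg-aaaa1111 sg-bbbb2222 \
  --tags Key=KubernetesCluster,Value=my-cluster

# Private subnets host internal ELBs, public subnets host internet-facing ones
aws ec2 create-tags --resources subnet-cccc3333 \
  --tags Key=kubernetes.io/role/internal-elb,Value=true
aws ec2 create-tags --resources subnet-dddd4444 \
  --tags Key=kubernetes.io/role/elb,Value=true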

The full list (AFAIK) of tags and annotations lives in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws.go

Steve Sloka:

Make sure you are setting the correct cloud provider settings with kubeadm (http://kubernetes.io/docs/admin/kubeadm/).
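If your kubeadm version still supports it, the provider can be passed at init time; this is only a sketch, and the flag has moved around between releases:

kubeadm init --cloud-provider aws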

The AWS cloud provider automatically syncs the available nodes with the ELB. I created a Service of type LoadBalancer, then scaled my cluster, and the new node was eventually added to the ELB: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws_loadbalancer.go#L376
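A quick way to check that a node was actually registered through the AWS cloud provider is to look at its ExternalID: if the kubelet ran with --cloud-provider=aws it should be the EC2 instance ID rather than just the node name (the node name below is a placeholder):

# prints something like i-0456efgh when the cloud provider did the registration
kubectl get node ip-10-0-0-42.ec2.internal -o jsonpath='{.spec.externalID}'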