Flannel fails in kubernetes cluster due to failure of subnet manager


I am running etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on a master node, as well as kubelet and kube-proxy on a minion node, as follows (all kube binaries are from kubernetes 1.7.4):

# [master node]
./etcd
./kube-apiserver --logtostderr=true --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.10.10.0/24 --insecure-port 8080 --secure-port=0 --allow-privileged=true --insecure-bind-address 0.0.0.0
./kube-scheduler --address=0.0.0.0 --master=http://127.0.0.1:8080
./kube-controller-manager --address=0.0.0.0 --master=http://127.0.0.1:8080

# [minion node]
./kubelet --logtostderr=true --address=0.0.0.0 --api_servers=http://$MASTER_IP:8080 --allow-privileged=true
./kube-proxy --master=http://$MASTER_IP:8080

After this, if I execute kubectl get all --all-namespaces and kubectl get nodes, I get

NAMESPACE   NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     svc/kubernetes   10.10.10.1   <none>        443/TCP   27m

NAME       STATUS    AGE       VERSION
minion-1   Ready     27m       v1.7.4+793658f2d7ca7

Then, I apply flannel as follows:

kubectl apply -f kube-flannel-rbac.yml -f kube-flannel.yml

Now, I see a pod is created, but with error:

NAMESPACE     NAME                    READY     STATUS             RESTARTS   AGE
kube-system   kube-flannel-ds-p8tcb   1/2       CrashLoopBackOff   4          2m

When I check the logs inside the failed container in the minion node, I see the following error:

Failed to create SubnetManager: unable to initialize inclusterconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

My question is: how do I resolve this? Is this an SSL issue? What step am I missing in setting up my cluster?


There are 2 answers

Answer from Six:

You could try passing --etcd-prefix=/your/prefix and --etcd-endpoints=address to flanneld instead of --kube-subnet-mgr, so flannel gets its net-conf from the etcd server rather than from the API server.

Keep in mind that you must push the net-conf to the etcd server first.
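A minimal sketch of that flow, assuming flannel's default etcd prefix (/coreos.com/network) and the etcd endpoint from the question; the net-conf values here are illustrative:

```shell
# Flannel reads its net-conf as plain JSON from the "config" key under
# its etcd prefix (default: /coreos.com/network).
NET_CONF='{"Network":"10.244.0.0/16","Backend":{"Type":"vxlan"}}'

# Sanity-check the JSON before pushing it; a malformed net-conf makes
# flanneld exit on startup.
echo "$NET_CONF" | python3 -c 'import json,sys; json.load(sys.stdin)'

# Push it to the etcd server running on the master node (etcd v2 keyspace).
etcdctl --endpoints http://127.0.0.1:2379 set /coreos.com/network/config "$NET_CONF"

# Then run flanneld against etcd instead of the API server
# (i.e. drop --kube-subnet-mgr from the DaemonSet args):
flanneld --etcd-endpoints=http://$MASTER_IP:2379 --etcd-prefix=/coreos.com/network
```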

UPDATE

The problem (/var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory) can appear when the apiserver is started without ServiceAccount in --admission-control=..., or when the kubelet itself runs inside a container (e.g. hyperkube), which was my case. If you want to run k8s components inside a container, you need to pass the 'shared' option to the kubelet volume:

/var/lib/kubelet/:/var/lib/kubelet:rw,shared
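For the first cause, here is a sketch of the apiserver invocation from the question with the ServiceAccount admission plugin added. The exact plugin list and the key file paths are assumptions, not authoritative; note that the ServiceAccount plugin also needs a key pair so tokens can actually be minted and verified:

```shell
# Same flags as in the question, plus admission control including
# ServiceAccount, and a public key to verify service account tokens.
# (Plugin list and key paths are illustrative.)
./kube-apiserver --logtostderr=true \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.10.10.0/24 \
  --insecure-port 8080 --secure-port=0 \
  --allow-privileged=true --insecure-bind-address 0.0.0.0 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --service-account-key-file=/path/to/sa.pub

# The controller-manager signs the tokens with the matching private key:
./kube-controller-manager --address=0.0.0.0 --master=http://127.0.0.1:8080 \
  --service-account-private-key-file=/path/to/sa.key
```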

Furthermore, enable the same option for Docker in docker.service:

MountFlags=shared
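One way to apply that without editing the packaged unit file is a systemd drop-in; the drop-in path and file name below are assumptions, adjust for your distro:

```shell
# Create a systemd drop-in that sets MountFlags=shared for the
# Docker daemon, then reload and restart it.
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/mount-flags.conf <<'EOF'
[Service]
MountFlags=shared
EOF
systemctl daemon-reload
systemctl restart docker
```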

Now the question is: is there a security hole with shared mounts?

Answer from sam:

Maybe there is something wrong in your flannel YAML file. You can try reinstalling flannel as follows. First, check for an old ip link:

ip link

If it shows a flannel interface, delete it:

ip link delete flannel.1

Then install flannel; its default pod network CIDR is 10.244.0.0/16:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
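After the apply, a quick sanity check (a sketch, assuming the default 10.244.0.0/16 network; note that the kube subnet manager also needs each node to have a podCIDR, which means the controller-manager should run with --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16):

```shell
# Confirm the flannel DaemonSet pods came up:
kubectl -n kube-system get pods -o wide | grep flannel

# The VXLAN interface should have been recreated on the node:
ip link show flannel.1

# Each node should have a podCIDR carved out of the flannel network,
# e.g. 10.244.1.0/24 sits inside 10.244.0.0/16:
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
python3 -c 'import ipaddress as i; assert i.ip_network("10.244.1.0/24").subnet_of(i.ip_network("10.244.0.0/16"))'
```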