I am running etcd, kube-apiserver, kube-scheduler, and kube-controller-manager on a master node, as well as kubelet and kube-proxy on a minion node, as follows (all kube binaries are from Kubernetes 1.7.4):
# [master node]
./etcd
./kube-apiserver --logtostderr=true --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.10.10.0/24 --insecure-port 8080 --secure-port=0 --allow-privileged=true --insecure-bind-address 0.0.0.0
./kube-scheduler --address=0.0.0.0 --master=http://127.0.0.1:8080
./kube-controller-manager --address=0.0.0.0 --master=http://127.0.0.1:8080
# [minion node]
./kubelet --logtostderr=true --address=0.0.0.0 --api_servers=http://$MASTER_IP:8080 --allow-privileged=true
./kube-proxy --master=http://$MASTER_IP:8080
After this, if I execute kubectl get all --all-namespaces and kubectl get nodes, I get:
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default svc/kubernetes 10.10.10.1 <none> 443/TCP 27m
NAME STATUS AGE VERSION
minion-1 Ready 27m v1.7.4+793658f2d7ca7
Then, I apply flannel as follows:
kubectl apply -f kube-flannel-rbac.yml -f kube-flannel.yml
Now, I see a pod is created, but with an error:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-flannel-ds-p8tcb 1/2 CrashLoopBackOff 4 2m
When I check the logs of the failed container on the minion node, I see the following error:
Failed to create SubnetManager: unable to initialize inclusterconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
My question is: how do I resolve this? Is this an SSL issue? What step am I missing in setting up my cluster?
You could try to pass --etcd-prefix=/your/prefix and --etcd-endpoints=address to flanneld instead of --kube-subnet-mgr, so that flannel gets its net-conf from the etcd server and not from the API server.
Keep in mind that you must push the net-conf to the etcd server yourself.
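For example, a rough sketch of that etcd-based setup (the prefix, network range, and backend below are illustrative values, not taken from your cluster):
# Push the flannel net-conf to the etcd server (etcd v2 API):
./etcdctl set /coreos.com/network/config '{"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
# Start flanneld against etcd instead of the kube subnet manager:
./flanneld --etcd-endpoints=http://127.0.0.1:2379 --etcd-prefix=/coreos.com/network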
UPDATE
The problem (/var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory) can appear when the apiserver runs without --admission-control=...,ServiceAccount,..., or when kubelet itself runs inside a container (e.g. hyperkube); the latter was my case. If you want to run the k8s components inside a container, you need to pass the 'shared' option to the kubelet volume, and also enable the same option for docker in docker.service.
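As a reference, a minimal sketch of what those two fixes can look like; the image name, tag, and paths are typical examples and not taken from the question:
# First cause: enable the ServiceAccount admission controller on the apiserver, e.g.:
#   ./kube-apiserver ... --admission-control=NamespaceLifecycle,ServiceAccount,ResourceQuota
# Second cause: run the containerized kubelet with shared mount propagation on its volume:
docker run -d --privileged --net=host --pid=host \
  -v /var/lib/kubelet:/var/lib/kubelet:shared \
  -v /var/run:/var/run:rw \
  gcr.io/google_containers/hyperkube:v1.7.4 \
  /hyperkube kubelet --api_servers=http://$MASTER_IP:8080 --allow-privileged=true
# And allow shared mounts for the docker daemon (systemd drop-in for docker.service):
#   [Service]
#   MountFlags=shared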
Now the question is: is there a security hole with shared mounts?