I have googled and searched for an answer to my dilemma; none of the answers I could find are applicable, although they say this has been discussed many times.
Below is my actual cluster setup: four worker nodes, two masters, and one load balancer.
I installed the dashboard:
XXXX@master01:~$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                busybox                                      1/1     Running   30         30h
kube-system            coredns-78cb77577b-lbp87                     1/1     Running   0          30h
kube-system            coredns-78cb77577b-n7rvg                     1/1     Running   0          30h
kube-system            weave-net-d9jb6                              2/2     Running   7          31h
kube-system            weave-net-nsqss                              2/2     Running   0          39h
kube-system            weave-net-wnbq7                              2/2     Running   7          31h
kube-system            weave-net-zfsmn                              2/2     Running   0          39h
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-dhcpn   1/1     Running   0          28h
kubernetes-dashboard   kubernetes-dashboard-665f4c5ff-6qnzp         1/1     Running   7          28h
I created my service accounts and assigned them the cluster-admin role:
XXXX@master01:~$ kubectl get sa -n kubernetes-dashboard
NAME                   SECRETS   AGE
default                1         28h
kube-apiserver         1         25h
kubernetes-dashboard   1         28h
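For reference, a minimal sketch of how a service account like the one above can be created and bound to cluster-admin (the account name kube-apiserver is taken from the listing; the binding name is my own assumption):

```shell
# Create the service account in the dashboard namespace.
kubectl create serviceaccount kube-apiserver -n kubernetes-dashboard

# Bind it to the built-in cluster-admin ClusterRole.
kubectl create clusterrolebinding kube-apiserver-dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:kube-apiserver
```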
I am using the kube-apiserver service account because it was easy to just load its certs in the browser; I already have them.
Now I try to access the dashboard using the load balancer: https://loadbalancer.local:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
At this point one would think I should get the dashboard, and every question I have encountered makes that assumption, but instead I am getting the following error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "error trying to reach service: dial tcp 10.36.0.1:8443: i/o timeout",
  "code": 500
}
So I decided to pull the logs:
kubectl logs -n kubernetes-dashboard service/kubernetes-dashboard
Error from server: Get "https://worker04:10250/containerLogs/kubernetes-dashboard/kubernetes-dashboard-665f4c5ff-6qnzp/kubernetes-dashboard": x509: certificate signed by unknown authority
All I get is this one line, so I had the idea of investigating what the issue is with the certificate on this worker node (worker04:10250). I used OpenSSL to check the certificate and discovered the following: worker04 has generated its own certificate, all right, but it has also generated its own CA.
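The check can be reproduced with something like the sketch below (the hostname and port come from the error above; the /etc/kubernetes/pki/ca.crt path is an assumption based on a kubeadm-style layout):

```shell
# Show the subject and issuer of the kubelet's serving certificate on
# worker04 (port 10250). If the issuer is not the cluster CA, the
# apiserver will reject the connection with "unknown authority".
openssl s_client -connect worker04:10250 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer

# Compare against the cluster CA's subject (path is an assumption):
openssl x509 -noout -subject -in /etc/kubernetes/pki/ca.crt
```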
And this is where I am, with no idea how to fix this and bring up the dashboard. I also tried a proxy on master01:
kubectl -v=9 proxy --port=8001 --address=192.168.1.24
and all I got was 403 Forbidden!
I have made some progress with this. I figured out that when a node generates and registers itself with a cluster, it generates its own certificate from a CSR signed by its own self-generated CA. To fix this, I generated certificates for all the nodes signed by the cluster CA, replaced the auto-generated certificates with them, and restarted the nodes.
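The per-node certificate generation can be sketched roughly as follows for worker04 (the /etc/kubernetes/pki paths and the kubelet cert location are assumptions based on a kubeadm-style layout; adjust them to your setup):

```shell
# Generate a key and a CSR for the node, then sign the CSR with the
# cluster CA instead of the node's self-generated one.
openssl genrsa -out worker04.key 2048
openssl req -new -key worker04.key -subj "/CN=worker04" -out worker04.csr
openssl x509 -req -in worker04.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -days 365 \
  -extfile <(printf "subjectAltName=DNS:worker04") \
  -out worker04.crt

# Then copy the pair over the kubelet's auto-generated certificate
# (e.g. under /var/lib/kubelet/pki/ on the node) and restart the kubelet.
```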