Why does Tiller connect to localhost:8080 for the Kubernetes API?


When using Helm for Kubernetes package management, after installing the Helm client and running

helm init

I can see the Tiller pod running on the Kubernetes cluster, but when I run helm ls it gives an error:

Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused

and with kubectl logs I can see a similar message:

[storage/driver] 2017/08/28 08:08:48 list: failed to list: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused

I can see the Tiller pod is running on one of the worker nodes rather than the master, and there is no API server running on that node. Why does it connect to 127.0.0.1 instead of my master's IP?
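
For reference, here is a quick way to check which API server the client side is configured to talk to and where the Tiller pod landed (a sketch, assuming a standard helm init install):

    # which API server is the client-side kubeconfig pointing at?
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

    # is the Tiller pod actually running, and on which node?
    kubectl -n kube-system get pods -o wide | grep tiller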


There are 4 answers

Tux:

I had been having this problem for a couple of weeks on my workstation, and none of the answers provided (here or on GitHub) worked for me.

What worked for me is this:

sudo kubectl proxy --kubeconfig ~/.kube/config --port 80

Notice that I am using port 80, so I needed sudo to be able to bind the proxy there, but if you are using 8080 you won't need that.

Be careful with this: under sudo, kubectl's default kubeconfig is /root/.kube/config rather than the one in your usual $HOME. You can either pass an absolute path to the config you want to use, create one in root's home, or use sudo's --preserve-env=HOME flag to keep your original HOME env var.
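
A minimal sketch of the same workaround on an unprivileged port, which avoids sudo and the root kubeconfig issue entirely (it assumes your kubeconfig lives at $HOME/.kube/config and that nothing else is listening on 8080):

    # run the proxy in the background on 8080, explicitly pointing it at your own kubeconfig
    kubectl proxy --kubeconfig "$HOME/.kube/config" --port 8080 &

    # sanity check: the request that was failing should now succeed through the proxy
    curl "http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER"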

Now, if you are using Helm by itself, I guess that's it. In my case I use Helm through the Terraform provider on GKE, so getting my setup working was a pain in the ass to debug, because the message I was getting doesn't even mention Helm and is returned by Terraform when planning. For anybody that may be in a similar situation:

These are the errors you get when doing a plan/apply operation in Terraform on any cluster with Helm releases in its state:

Error: error installing: Post "http://localhost/apis/apps/v1/namespaces/kube-system/deployments": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/system/secrets/apigee-secrets": dial tcp [::1]:80: connect: connection refused

You get one of these errors for every Helm release in the cluster, or something like that. In this case, for a GKE cluster, I had to ensure that the env var GOOGLE_APPLICATION_CREDENTIALS pointed to a key file with valid credentials (the application-default one, unless you are not using the default setup for application auth):

  gcloud auth application-default login 
  export GOOGLE_APPLICATION_CREDENTIALS=/home/$USER/.config/gcloud/application_default_credentials.json

With the kube proxy in place and the correct credentials, I am able to use Terraform (and Helm) as usual again. I hope this is helpful for anybody experiencing this.
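
As a sanity check before re-running Terraform (a sketch; it assumes you use application-default credentials and that the proxy from above is still running):

    # confirm that application-default credentials actually resolve
    gcloud auth application-default print-access-token > /dev/null && echo "ADC OK"

    # then plan/apply as usual
    terraform plan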

Jose Peinado:

Run this before doing helm init. It worked for me.

kubectl config view --raw > ~/.kube/config
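
In context, the full sequence looks roughly like this (a sketch; it assumes kubectl itself can already reach the cluster):

    kubectl config view --raw > ~/.kube/config   # materialize a plain kubeconfig
    helm init                                    # install (or re-install) Tiller
    helm ls                                      # should no longer dial localhost:8080
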
fishuke:
kubectl config view --raw > ~/.kube/config    
export KUBECONFIG=~/.kube/config

This worked for me.
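
To confirm the exported kubeconfig is the one actually being picked up (a quick check, nothing Helm-specific):

    kubectl config current-context   # should show your real cluster context
    kubectl cluster-info             # should print the master URL, not localhost:8080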

snehal:

First delete the Tiller deployment and service, and remove the local Helm state, by running the commands below:

kubectl delete deployment tiller-deploy --namespace=kube-system
kubectl delete service tiller-deploy --namespace=kube-system
rm -rf $HOME/.helm/

By default, helm init installs the Tiller pod into the kube-system namespace, with Tiller configured to use the default service account. Configure Tiller with cluster-admin access with the following command:

kubectl create clusterrolebinding tiller-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:default

Then install the Helm server (Tiller) with the following command:

helm init
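
Afterwards, a quick way to verify that Tiller came up and that the client can reach it (a sketch, assuming the default tiller-deploy deployment name):

    kubectl -n kube-system rollout status deployment/tiller-deploy
    helm version   # prints both client and server versions once Tiller responds
    helm ls        # the original error should be gone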