Provide Kubernetes cluster authentication with kubeconfig over https


I have a Kubernetes cluster. I created it on Google Cloud, but using GCE VM instances rather than GKE: one master node and two worker nodes. The nodes were joined with kubeadm, and the kube-flannel.yml file provides the pod network. I am exposing my cluster to the outside world through Postman using my VM's public IP and a NodePort, i.e. publicip:nodePort/adapter_name. Requests to that URL reach my pods and logs are generated. When I used minikube before, I used port-forwarding to expose the port; I am not using that now.

There is a default kubeconfig file called config at $HOME/.kube/config. It has the following content:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJ....
    server: https://10.128.0.12:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFe....
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb.....

The server address is https://10.128.0.12:6443. Can I change this default URL to the one required for authentication (my REST API URL)?

My requirement is to provide authentication for the REST API URL that my application exposes while running in a Kubernetes pod.

How can I authenticate my REST API URL with this kubeconfig method, or by creating a new kubeconfig file and using that?
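For reference, a kubeconfig can also authenticate with a bearer token instead of client certificates; a minimal sketch (the user name and token value below are placeholders, not taken from my setup):

```yaml
# Hypothetical kubeconfig using a bearer token instead of client certificates.
# The user name and token are placeholders.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded-CA-cert>
    server: https://10.128.0.12:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: token-user
  name: token-user@kubernetes
current-context: token-user@kubernetes
users:
- name: token-user
  user:
    token: <bearer-token>
```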

https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/

http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/

I got a few ideas from the two blogs above and tried to implement them, but none satisfies my requirement. Authentication via Postman using a JWT token would also be acceptable.

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"} 
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:09:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"} 

There are 2 answers

Bruce wayne - The Geek Killer (BEST ANSWER)

The best method to authenticate our client API endpoint URL is to use Istio.

Istio installation

I documented the whole process of providing security via Istio in a PDF file, which I am attaching here. Istio is used for verification of the token and Keycloak is used for generation of the JWT token.
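As a sketch of what the Istio side of this setup can look like (the namespace, app label, and Keycloak issuer/JWKS URLs are placeholder assumptions, not taken from the attached PDF): a RequestAuthentication resource validates JWTs issued by Keycloak, and an AuthorizationPolicy rejects requests that carry no valid token.

```yaml
# Hypothetical Istio policies: validate JWTs issued by Keycloak and
# allow only requests with a valid token. Issuer and jwksUri are placeholders.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-adapter          # label of the REST API workload (assumed)
  jwtRules:
  - issuer: "https://keycloak.example.com/auth/realms/myrealm"
    jwksUri: "https://keycloak.example.com/auth/realms/myrealm/protocol/openid-connect/certs"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-adapter
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]   # any authenticated principal; no token -> denied
```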

PjoterS

Posting this as Community Wiki.

I. Accessing the Kubernetes API

Can I change this default URL (the cluster server IP address) to the one required for authenticating my REST API URL?

I wouldn't recommend this. KUBECONFIG files are used to organize information about clusters, users, namespaces, authentication mechanisms and to store information about your connection to the Kubernetes cluster. When you use kubectl to execute commands, it gets the correct communication information from this KUBECONFIG.

In a KUBECONFIG you can authenticate using X509 client certs or different types of tokens. More details can be found in Authentication strategies and Access Clusters Using the Kubernetes API.

If you are interested in how to access the Kubernetes API using a Bearer Token, please check these docs.
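One common way to obtain such a token is a ServiceAccount bound to a role; a minimal sketch (the account name and the read-only "view" binding are illustrative assumptions):

```yaml
# Hypothetical ServiceAccount whose token can be sent as an
# "Authorization: Bearer <token>" header to the API server.
# Names and the role binding are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-client
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: api-client-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view            # built-in read-only role
subjects:
- kind: ServiceAccount
  name: api-client
  namespace: default
```

On Kubernetes 1.19 the token is stored in the Secret that is automatically created for the ServiceAccount and can be read with kubectl.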

II. Accessing the Client API

If you want to expose your REST API endpoint publicly, you could use:

NodePort: exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
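A minimal NodePort Service sketch (the Service name, selector label, and ports are placeholders for the asker's application):

```yaml
# Hypothetical NodePort Service for the REST API.
apiVersion: v1
kind: Service
metadata:
  name: adapter-nodeport
spec:
  type: NodePort
  selector:
    app: my-adapter     # label on the API pods (assumed)
  ports:
  - port: 8080          # Service port inside the cluster
    targetPort: 8080    # container port
    nodePort: 30080     # static port on every node (default range 30000-32767)
```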

LoadBalancer: exposes the Service externally using a cloud provider's load balancer. The NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
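A minimal LoadBalancer Service sketch (again with placeholder names and ports); on a supported cloud such as GCP this provisions an external load balancer with a public IP:

```yaml
# Hypothetical LoadBalancer Service for the REST API.
apiVersion: v1
kind: Service
metadata:
  name: adapter-lb
spec:
  type: LoadBalancer
  selector:
    app: my-adapter     # label on the API pods (assumed)
  ports:
  - port: 80            # external port
    targetPort: 8080    # container port
```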

Note: for a bare metal environment, consider using MetalLB:

Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
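A sketch of MetalLB's layer-2 mode using its CRD-based configuration (newer MetalLB releases; the address range is an assumption about the local network):

```yaml
# Hypothetical MetalLB layer-2 setup: hand out addresses from a local
# range to Services of type LoadBalancer. The range is a placeholder.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```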

Once you have exposed your API to the outside world (if necessary), you can put an authentication layer in front of it.

As an alternative solution, you could consider Keycloak as additional authentication, with Gatekeeper running as a sidecar next to the REST API to verify that the request was authenticated.
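A sketch of that sidecar pattern (image tag, realm URL, client credentials, and ports are all placeholder assumptions): Gatekeeper listens on the pod port exposed by the Service, validates the request against Keycloak, and only then proxies it to the API container on localhost.

```yaml
# Hypothetical Deployment: keycloak-gatekeeper sidecar in front of the API.
# Realm URL, client credentials, images and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adapter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-adapter
  template:
    metadata:
      labels:
        app: my-adapter
    spec:
      containers:
      - name: rest-api
        image: myrepo/my-adapter:latest       # the asker's application (assumed)
        ports:
        - containerPort: 8080
      - name: gatekeeper
        image: quay.io/keycloak/keycloak-gatekeeper:7.0.0
        args:
        - --listen=:3000                      # port the Service should target
        - --upstream-url=http://127.0.0.1:8080
        - --discovery-url=https://keycloak.example.com/auth/realms/myrealm
        - --client-id=my-adapter
        - --client-secret=<secret>
        - --enable-default-deny=true          # reject unauthenticated requests
        ports:
        - containerPort: 3000
```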

If you are interested in authentication between microservices, you can check the Authentication between microservices using Kubernetes identities article.

If you are interested in Istio, please take a look at Istio Security. Istio provides two types of authentication:

  • Peer authentication: used for service-to-service authentication to verify the client making the connection
  • Request authentication: used for end-user authentication to verify the credential attached to the request. Istio enables request-level authentication with JSON Web Token (JWT) validation and a streamlined developer experience using a custom authentication provider or any OpenID Connect providers - example
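The peer-authentication side can be sketched as a namespace-wide mutual TLS policy (the namespace is illustrative):

```yaml
# Hypothetical PeerAuthentication: require mutual TLS for all
# service-to-service traffic in the namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT
```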