How can I connect to a Kubernetes workload's Cluster IP from an external network via Google Cloud Classic VPN?


We have a Kubernetes cluster running on GKE, in its own VPC with a node subnet of 10.184.0.0/20. This cluster has a workload that has been assigned an external load balancer for public access, along with an internal Cluster IP for internal communication. The Service subnet is 10.0.0.0/20.

There is a Google Cloud Classic VPN set up on the same VPC to provide access to the private network.

We have another system, hosted on-premise, that connects via the above VPN using a tunnel. The on-premise network can ping the nodes in the VPC via their private IPs on the 10.184.0.0/20 subnet, but it is unable to ping or telnet to the Cluster IP, which is on the 10.0.0.0/20 subnet.

Is this possible to achieve?


There is 1 answer.

Answered by alejandrooc:

This is indeed possible. Since your tunnel is already up and you can ping your nodes, my guess is that you are unable to reach the Pod and Service ranges from your on-premise application, meaning that you are only advertising the primary 10.184.0.0/20 CIDR but not the secondary ranges. Am I right?
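One way to confirm which ranges your cluster is actually using is to read the cluster's IP allocation policy. A minimal sketch, assuming a cluster named my-cluster in zone us-central1-a (substitute your own cluster name and location):

```
# Show the Pod and Service secondary ranges assigned to the cluster
gcloud container clusters describe my-cluster \
    --zone us-central1-a \
    --format="value(ipAllocationPolicy.clusterIpv4Cidr, ipAllocationPolicy.servicesIpv4Cidr)"
```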

You can check this easily by running a Connectivity Test. It simulates traffic between a source and a destination (in this case the source is an IP from your on-premise network and the destination should be your Service IP), takes several products into consideration (firewall rules, VPC peering, routes, VPN tunnels, etc.), and will let you know if something is wrong or missing in your environment.
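A Connectivity Test can also be created from the CLI. In this sketch, the test name, source IP, destination IP, and port are placeholders; use an address from your on-premise range as the source and your Service's Cluster IP as the destination:

```
# Simulate TCP traffic from an on-prem IP to the Service's Cluster IP
gcloud network-management connectivity-tests create onprem-to-service \
    --source-ip-address=192.168.1.10 \
    --destination-ip-address=10.0.0.100 \
    --destination-port=443 \
    --protocol=TCP

# Inspect the result of the reachability analysis
gcloud network-management connectivity-tests describe onprem-to-service
```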

If those ranges are missing from your VPN configuration, you will need to re-create the tunnel and make sure to add the secondary ranges to the traffic selectors (or use a wide 0.0.0.0/0 CIDR).
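For reference, a policy-based Classic VPN tunnel takes its traffic selectors at creation time, which is why it must be re-created rather than edited. A sketch with placeholder gateway name, region, peer address, shared secret, and on-premise range:

```
# Re-create the tunnel, advertising both the node subnet and the Service range
gcloud compute vpn-tunnels create my-tunnel \
    --region=us-central1 \
    --target-vpn-gateway=my-classic-gateway \
    --peer-address=203.0.113.5 \
    --shared-secret=MY_SHARED_SECRET \
    --ike-version=2 \
    --local-traffic-selector=10.184.0.0/20,10.0.0.0/20 \
    --remote-traffic-selector=192.168.1.0/24
```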

Finally, remember that you need to expose your applications using Services (ClusterIP, NodePort, LoadBalancer) and test again from your on-premise network.
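For example, assuming a Deployment named my-app already exists (the name and ports here are placeholders), a ClusterIP Service could be created like this:

```
# Expose the Deployment on an internal Cluster IP from the 10.0.0.0/20 range
kubectl expose deployment my-app --type=ClusterIP --port=80 --target-port=8080

# Verify the assigned Cluster IP
kubectl get service my-app
```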