GKE cluster egress traffic coming from the nodes rather than the LoadBalancer service


I'm new to GKE and K8s, so please bear with me and my silliness. I currently have a GKE cluster with two nodes in the default node pool, and the cluster is exposed via a LoadBalancer-type service.

These nodes are tasked with calling a Compute Engine instance via HTTP. I have a firewall rule set in GCP that denies ingress traffic to the GCE instance except for traffic coming from the GKE cluster.
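The allow side of that rule looks roughly like this (a sketch; the rule name, network, port, target tag, and source range are placeholders, not my exact values):

```bash
# Hypothetical allow rule: permit HTTP to the GCE instance only from the
# listed sources; GCP's implied ingress deny blocks everything else.
# Name, network, port, source range, and target tag are placeholders.
gcloud compute firewall-rules create allow-http-from-gke \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=203.0.113.10/32 \
    --target-tags=my-gce-instance
```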

The issue is that the traffic isn't coming from the LoadBalancer service's IP but rather from the nodes themselves, so whitelisting the service's IP has no effect and I have to whitelist the IPs of the nodes instead. This is not ideal, since each time a new node is created I have to change the firewall rule. I understood that once you have a service set up in the cluster, all traffic would be directed through the IP of the service, so why is this happening? What am I doing wrong? Please let me know if you need more details, and thanks in advance.
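For what it's worth, the node IPs I end up whitelisting are the EXTERNAL-IP values shown by:

```bash
# List nodes with their internal and external IPs; pod egress from
# nodes with public IPs leaves through these EXTERNAL-IP addresses.
kubectl get nodes -o wide
```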

YAML of the service (screenshot): https://i.stack.imgur.com/XBZmE.png
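Roughly, the manifest looks like this (a sketch; the name, selector, and ports are placeholders, not the exact values from the screenshot):

```yaml
# Minimal LoadBalancer Service of the shape described above.
# Name, selector, and ports are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```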


1 Answer

Best answer, by guillaume blaquiere:

When you create a service on GKE and expose it to the internet, a load balancer is created. This load balancer manages only the ingress traffic (traffic from the internet to your GKE cluster).

When your pod initiates a communication, the traffic is not managed by the load balancer but by the node that hosts the pod, if the node has a public IP. (Instead of denying the traffic to the GCE instance, simply remove the nodes' public IPs; it's easier and safer!)
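For example, a private cluster gives you nodes without public IPs (a sketch; the cluster name, zone, and master CIDR are placeholders):

```bash
# Create a private GKE cluster: nodes get only internal IPs.
# Name, zone, and master CIDR below are illustrative placeholders.
gcloud container clusters create my-private-cluster \
    --zone=us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28
```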

If you want to control the source IP of egress traffic originating from your pods, you have to set up Cloud NAT for the network your GKE cluster runs in.
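A minimal Cloud NAT setup looks like this (a sketch; the router and NAT names, region, and network are placeholders):

```bash
# Cloud Router + Cloud NAT: nodes without public IPs can still reach out,
# and egress leaves through the NAT's external IP(s) instead of node IPs.
# Router/NAT names, region, and network are illustrative placeholders.
gcloud compute routers create nat-router \
    --network=default \
    --region=us-central1

gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

If you reserve a static external address and pass it via --nat-external-ip-pool instead of --auto-allocate-nat-external-ips, your firewall rule can whitelist that single stable IP rather than the ever-changing node IPs.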