GKE Kubernetes network policy allowing other node IPs


I have a GKE cluster (1.16) with 2+ nodes and a GKE Ingress HTTPS load balancer.
I'm deploying several namespaces on it.
I want to deny all traffic between namespaces, so I'm using the recipe found here.
However, according to this documentation (my Service's externalTrafficPolicy uses the default value, Cluster):

If externalTrafficPolicy is not set to Local, the network policy must also allow connections from other node IPs in the cluster.

How do I allow connections from other node IPs in the cluster in my NetworkPolicy definition?
My current definition is:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
    - ipBlock:
        cidr: 35.191.0.0/16
    - ipBlock:
        cidr: 130.211.0.0/22

1 Answer

Best answer, by Peter:

GKE nodes communicate over a private address space, so you can allow 10.0.0.0/8 (or be more specific).
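Putting that together, a sketch of the policy with the node address range added (assuming the cluster's nodes use RFC 1918 addresses in 10.0.0.0/8; substitute your actual node subnet CIDR if you want to be stricter):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo
spec:
  podSelector: {}                 # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}             # pods in the same namespace
    - ipBlock:
        cidr: 35.191.0.0/16       # Google Cloud load balancer health checks
    - ipBlock:
        cidr: 130.211.0.0/22      # Google Cloud load balancer health checks
    - ipBlock:
        cidr: 10.0.0.0/8          # node IPs (assumption: nodes are in this private range)
```

With externalTrafficPolicy: Cluster, traffic arriving at one node may be forwarded (SNATed to the node's IP) to a pod on another node, which is why the node range must be allowed in addition to the load balancer ranges.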