Google Kubernetes Engine: NetworkPolicy allowing egress to k8s-metadata-proxy

Context

I have a Google Kubernetes Engine (GKE) cluster with Workload Identity enabled. As part of Workload Identity, a k8s-metadata-proxy DaemonSet runs on the cluster. I have a namespace my-namespace and want to deny all egress traffic of pods in the namespace except egress to the k8s-metadata-proxy DaemonSet. As such I have the following NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: my-namespace
spec:
  # Apply to all pods.
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    # This is needed to whitelist k8s-metadata-proxy. See https://github.com/GoogleCloudPlatform/k8s-metadata-proxy
    - protocol: TCP
      port: 988

Problem

The NetworkPolicy is too broad because it allows egress TCP traffic to any host on port 988 instead of just egress to the k8s-metadata-proxy DaemonSet. I can't seem to find a way to specify .spec.egress[0].to to achieve the granularity I want.

I have tried the following to selectors:

  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          namespace: kube-system
    ports:
    - protocol: TCP
      port: 988
  - to:
    - ipBlock:
        cidr: <cidr of pod IP range>
    - ipBlock:
        cidr: <cidr of services IP range>
    ports:
    - protocol: TCP
      port: 988

but these rules result in traffic to the k8s-metadata-proxy being blocked.

Question

How can I select the k8s-metadata-proxy DaemonSet in the to part of an egress rule in a networking.k8s.io/v1/NetworkPolicy?

1 Answer

Dawid Kruk (accepted answer)

As I said in the comment:

Hello. You can add podSelector.matchLabels to your Egress definition to allow your pod to connect only to Pods with a specific label. You can read more about it here: cloud.google.com/kubernetes-engine/docs/tutorials/…

This comment could be misleading, as communication with the gke-metadata-server is described in the official documentation. Focusing on the relevant part of that documentation:

Understanding the GKE metadata server

The GKE metadata server is a new metadata server designed for use with Kubernetes. It runs as a daemonset, with one Pod on each cluster node. The metadata server intercepts HTTP requests to http://metadata.google.internal (169.254.169.254:80), including requests like GET /computeMetadata/v1/instance/service-accounts/default/token to retrieve a token for the Google service account the Pod is configured to act as. Traffic to the metadata server never leaves the VM instance that hosts the Pod.

Note: If you have a strict cluster network policy in place, you must allow egress to 127.0.0.1/32 on port 988 so your Pod can communicate with the GKE metadata server.
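
As an illustration of what the metadata server handles, a Pod set up with Workload Identity can fetch a token for its Google service account with a request like the one named in the quote above (a minimal sketch; if DNS is blocked by your policy, use 169.254.169.254 in place of the hostname):

$ curl -s -H 'Metadata-Flavor: Google' \
    http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

The response is a JSON document whose access_token field holds the token.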

The rule that allows traffic only to the GKE metadata server is described in the note at the end of the citation above: egress to 127.0.0.1/32 on port 988. Note that the to and ports fields must live in the same egress list item, since each item is an independent rule; splitting them into two items would allow all ports to 127.0.0.1/32 plus port 988 to any destination. The YAML definition should look like this:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-rule
  namespace: restricted-namespace # <- namespace your pod is in 
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: nginx # <- label used by pods trying to communicate with metadata server
  egress:
  - to:
    - ipBlock:
        cidr: 127.0.0.1/32 # <- the GKE metadata server is reached via the node's loopback address
    ports:
    - protocol: TCP
      port: 988 # <- the port the GKE metadata server listens on

Assuming that:

  • You have a Kubernetes cluster with:
    • Network Policy enabled
    • Workload Identity enabled
  • Your Pods are trying to communicate from the restricted-namespace namespace
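
After saving the manifest (e.g. as egress-rule.yaml, a filename chosen here for illustration) and applying it:

$ kubectl apply -f egress-rule.yaml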

The output of describing the NetworkPolicy:

  • $ kubectl describe networkpolicy -n restricted-namespace egress-rule
Name:         egress-rule
Namespace:    restricted-namespace
Created on:   2020-10-04 18:31:10 +0200 CEST
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy","metadata":{"annotations":{},"name":"egress-rule","namespace":"restricted-name...
Spec:
  PodSelector:     app=nginx
  Allowing ingress traffic:
    <none> (Selected pods are isolated for ingress connectivity)
  Allowing egress traffic:
    To Port: 988/TCP
    To:
      IPBlock:
        CIDR: 127.0.0.1/32
        Except: 
  Policy Types: Egress

Disclaimer!

Applying this policy will deny all traffic from pods with the app=nginx label that is not destined for the metadata server!

You can create and exec into a pod with the label app=nginx by running:

kubectl run -it --rm nginx \
--image=nginx \
--labels="app=nginx" \
--namespace=restricted-namespace \
-- /bin/bash

Tip!

The nginx image is used because it has curl installed by default!

With this policy in place, the pod won't be able to communicate with the DNS server. You can either:

  • allow your pods to communicate with the DNS server (see the sketch after this list)
  • set the env variable for the metadata server to its IP address (169.254.169.254) so that no DNS lookup is needed
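
A minimal sketch of the first option, written as one more item appended to the egress list of the policy above. It assumes the cluster DNS pods live in kube-system and carry the common k8s-app=kube-dns label (true for kube-dns on GKE); verify the labels on your cluster:

  - to:
    - namespaceSelector: {} # <- any namespace ...
      podSelector:
        matchLabels:
          k8s-app: kube-dns # <- ... but only pods with this label
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53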

Example of communicating with GKE Metadata Server:

  • $ curl 169.254.169.254/computeMetadata/v1/instance/ -H 'Metadata-Flavor: Google'
attributes/
hostname
id
service-accounts/
zone
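
Conversely, to confirm that everything else is denied, a request from the same pod to any other address should fail (the destination IP below is arbitrary):

  • $ curl -m 5 http://1.1.1.1 # <- times out, as this egress is not allowed by the policy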


To allow specific pods to send traffic only to specific pods on specific ports, you can use the following policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-rule
  namespace: restricted-namespace # <- namespace of "source" pod
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: ubuntu # <- label for "source" pod
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: nginx # <- label for "destination" pod
    ports:
    - protocol: TCP
      port: 80 # <- allow only port 80
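
To test it, you can reuse the trick from the Tip above: run a "source" pod from the nginx image (which ships with curl) but label it app=ubuntu. This assumes a "destination" pod labeled app=nginx is already running in the same namespace, and <nginx-pod-ip> is a placeholder for its IP; DNS is not allowed by this policy, so address the destination by IP:

kubectl run -it --rm src \
--image=nginx \
--labels="app=ubuntu" \
--namespace=restricted-namespace \
-- /bin/bash

# inside the pod:
curl http://<nginx-pod-ip>:80        # <- allowed: matching label and port
curl -m 5 http://<nginx-pod-ip>:8080 # <- blocked: only port 80 is allowed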