Context: I am using EKS with the Calico plugin for network policies, and a managed node group.
I have a namespace called "simon-test" in which I want to deny all egress from the namespace to others (so pods in simon-test will not be able to see other pods in other namespaces). I tried to do this using the following network policy (which seems to work as expected):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all-egress
  namespace: simon-test
spec:
  policyTypes:
    - Egress
  podSelector: {}
  egress: []
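As a sanity check, once applied the policy can be listed and inspected with standard kubectl commands (only the names already used above):

kubectl get networkpolicy -n simon-test
kubectl describe networkpolicy default-deny-all-egress -n simon-test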
However, this also blocked all internal networking within the namespace. To fix this, I created another network policy that is supposed to allow all traffic within the namespace:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-internal
  namespace: simon-test
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: simon-test
But this doesn't solve the problem, as there is still no networking within the namespace.
I am also curious why I can still reach "simon-test" from a pod in another namespace. I am running nc -nlvp 9999 in a pod in "simon-test", and nc -z ip-of-pod-in-simon-test-ns 9999 from a pod in another namespace can reach it, yet the same check from another pod within "simon-test" cannot.
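For context, the checks look roughly like this (the pod names are placeholders I am using for illustration; nc must exist in the images, and the listener needs its own terminal):

# listener inside simon-test
kubectl exec -n simon-test pod-a -- nc -nlvp 9999
# from a pod in another namespace: this still connects
kubectl exec -n other-ns pod-b -- nc -z ip-of-pod-a 9999
# from another pod inside simon-test: this fails
kubectl exec -n simon-test pod-c -- nc -z ip-of-pod-a 9999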
I am not sure exactly about the internals of Calico / network policies, but I was able to solve this as follows:
The ingress rule above was not working because the namespace had no "name=simon-test" label on it. I thought that by default all namespaces had a label called "name" that could be referenced here, but that doesn't seem to be the case. To solve this I had to add the label myself:
kubectl label ns simon-test name=simon-test
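If it helps, the label can be verified with the command below. As a side note, if I remember correctly, on newer Kubernetes versions (1.21+) every namespace automatically carries an immutable kubernetes.io/metadata.name label, which a namespaceSelector could match instead of a hand-added one.

kubectl get namespace simon-test --show-labels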
For the second problem:
I am curious why I can still reach "simon-test" from another pod in another namespace though (when ingress was blocked)
it was because I was trying to reach the "simon-test" namespace from a pod in kube-system that happened to have hostNetwork enabled. As a result, the IP address assigned to that pod was actually the IP address of the k8s node and not a pod IP (apparently network policies can tell apart whether an IP belongs to a pod or to a node?), so the traffic was not filtered by the network policy.
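In case it's useful to anyone, whether the pods in kube-system run on the host network can be checked with a standard kubectl query:

kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork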