This works fine. But trying to use a selector to specify that only kube-dns pods should be allowed as the destination fails miserably.
At first I thought it would even work with a namespaced network policy targeting the pod specifically. But this seems to work only for pods not running with the host network.
I’ve also tried enabling applyOnForward and preDNAT, without luck. I didn’t, however, try targeting the service’s cluster IP itself.
This must be a common issue, since allowing the cluster DNS to function is a basic requirement?
Another thing I discovered is that even without the above config, I can query the cluster IP of the DNS pods from the host itself, as well as from a different subnet than the ones listed above (via WireGuard).
But pods with hostNetwork: true can’t resolve DNS for pulling their image.
I’m very confused
hostNetwork: true tells Kubernetes “do not use Calico for this pod; treat it as part of the host itself.” Calico is therefore not aware of host-networked pods: in the networking sense they aren’t pods at all, they run as part of the host. You can write policy that matches traffic from them, but you have to match it as traffic from that host.
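Matching traffic “from that host” in Calico policy generally means the host is represented by a HostEndpoint carrying labels you can select on. A minimal sketch of such a HostEndpoint (the name, node, interface, label, and IP are all placeholders, not taken from this cluster):

```yaml
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node1-eth0          # placeholder: one HEP per node/interface
  labels:
    role: k8s-node          # placeholder label to match in policy selectors
spec:
  node: node1               # placeholder Kubernetes node name
  interfaceName: eth0       # or "*" to cover all of the host's interfaces
  expectedIPs:
    - 10.0.0.10             # placeholder host IP
```

With auto host endpoints enabled, Calico creates these for you; the point is that policy then matches the host via the HEP’s labels rather than via the pod.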
Yeah, that’s what I figured. But I do have auto host endpoints enabled, creating HEPs with interfaceName: *.
I would have thought that I could then let the kube-dns pod IPs accept ingress from the HEPs as well, by using a GlobalNetworkPolicy that specifically selects k8s-app == kube-dns.
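The idea described above might be sketched like this (a reconstruction, not the original manifest; the HEP label used in the source selector is a placeholder assumed to exist on the auto-created HEPs):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-hosts-to-kube-dns
spec:
  # Applies to the kube-dns workload endpoints.
  selector: k8s-app == 'kube-dns'
  ingress:
    - action: Allow
      protocol: UDP
      source:
        selector: role == 'k8s-node'   # placeholder label assumed on the HEPs
      destination:
        ports: [53]
    - action: Allow
      protocol: TCP
      source:
        selector: role == 'k8s-node'   # placeholder label assumed on the HEPs
      destination:
        ports: [53]
```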
And I’m still confused why I’m able to talk to kube-dns directly from the host, as well as from a subnet on a WireGuard interface (not a Felix-managed WireGuard overlay), even without the above configuration, while the hostNetwork pod can’t (when resolving the Docker image hostname).
I seem unable to block access to the kube-dns clusterIP from the subnet on the WireGuard interface no matter what I do.
This type of config does, however, work as expected for a different service in the default namespace. Perhaps the cluster DNS is handled in a special way.
The other similar config that does work looks like this:
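A minimal sketch of a policy of that shape (the name, namespace, labels, subnet, and port are placeholders, not the original manifest):

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-wg-to-my-dns       # placeholder name
  namespace: default
spec:
  selector: app == 'my-dns'      # placeholder label on the service's pods
  ingress:
    - action: Allow
      protocol: UDP
      source:
        nets:
          - 192.168.200.0/24     # placeholder WireGuard subnet
      destination:
        ports: [53]
```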
Oh, OK. But it does work for accessing my own DNS service in the default namespace, while denying access to other services not listed in the policy as expected; the exception is kube-dns.
I did make it block kube-dns for the WireGuard subnet by specifically adding a deny like this:
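A sketch of what such a deny rule might look like (a reconstruction with placeholder names, labels, and subnet, not the original):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-wg-to-kube-dns      # placeholder name
spec:
  applyOnForward: true
  selector: role == 'k8s-node'   # placeholder label assumed on the HEPs
  ingress:
    - action: Deny
      source:
        nets:
          - 192.168.200.0/24     # placeholder WireGuard subnet
      destination:
        selector: k8s-app == 'kube-dns'
```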
But if I try to delete the destination to make it a default deny (while applyOnForward is set), it also denies the first Allow and basically makes everything inaccessible for the WireGuard subnet.
And finally, an off-topic question:
is there a recommended way to calicoctl apply these policies that also removes ones that no longer exist in the provided YAML? Something like kubectl apply --prune.
While activating dual stack (going from IPv4-only to having both), I noticed Calico doesn’t manage the firewall rules on the IPv6 side. There are no calico* tables when listing with ip6tables.
Even though IPv6 is activated and assigning IPs to pods, and HEPs have IPv6 addresses on them.
calicoctl get felixConfiguration default shows that ipv6Support: true.
The IPv6 pools configured in Calico are used and IPs are allocated to the pods; it’s just the firewall that doesn’t get any IPv6 updates.