While trying to configure global and namespaced network policies for the first time, I'm struggling to fully understand how to specify my selectors.
The specific issue currently is with pods running with hostNetwork: true.
I’m trying to allow access to the cluster DNS service in the kube-system namespace.
```yaml
# TODO: doesn't work?
# selector: k8s-app == 'kube-dns'
# Allow from host & pod networks
- action: Allow
```
This works fine, but trying to use a selector to restrict the destination to only the kube-dns pods fails miserably.
At first I thought it would even work with a namespaced network policy targeting the pods specifically, but that only seems to work for pods not running on the host network.
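For reference, a minimal sketch of such a namespaced policy (assuming the projectcalico.org/v3 NetworkPolicy API; the policy name and port list are illustrative, not from the original config):

```yaml
# Sketch: namespaced Calico policy allowing ingress to kube-dns.
# This only takes effect for traffic from Calico-managed (non-hostNetwork) pods.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-dns-ingress   # hypothetical name
  namespace: kube-system
spec:
  selector: k8s-app == 'kube-dns'
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: UDP
      destination:
        ports: [53]
    - action: Allow
      protocol: TCP
      destination:
        ports: [53]
```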
I’ve also tried enabling applyOnForward and preDNAT, without luck. I didn’t, however, try targeting the service cluster IP itself.
This must be a common issue when allowing the cluster DNS to function?
Another thing I discovered is that, even without the above config, I’m allowed to query the cluster IP of the DNS pods from the host itself, as well as from a different subnet than the ones listed above (via WireGuard).
But pods with hostNetwork: true can’t resolve DNS when pulling their image.
I’m very confused
hostNetwork: true tells Kubernetes “do not use Calico for this pod; treat it as part of the host itself”. Calico is therefore not aware of host-networked pods; in the networking sense they’re not pods at all, they run as part of the host. You can write policy that matches traffic from them, but you have to match traffic from that host.
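Matching traffic from the host is done via a HostEndpoint. A minimal sketch (the node name, interface, label, and IP below are all hypothetical):

```yaml
# Hypothetical HostEndpoint for one node; with this in place, policies can
# select the host's traffic via the labels below.
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node1-eth0
  labels:
    role: k8s-node   # assumed label, used by policy selectors
spec:
  node: node1
  interfaceName: eth0
  expectedIPs:
    - 192.168.0.10
```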
Hey, thanks for replying.
Yeah, that’s what I figured. But I do have auto host endpoints enabled to create HEPs for the nodes.
I would have thought that I could then let the pod IPs of kube-dns accept ingress from HEPs as well, by using a GlobalNetworkPolicy that specifically selects k8s-app == 'kube-dns'.
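Something along these lines (a sketch only; the HEP label `role == 'k8s-node'` is an assumption about how the auto-HEPs are labelled, and the port is illustrative):

```yaml
# Sketch: cluster-wide policy letting host endpoints reach the kube-dns pods.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-heps-to-kube-dns   # hypothetical name
spec:
  # Selects the kube-dns pods as the policy targets.
  selector: k8s-app == 'kube-dns'
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: UDP
      source:
        # Hypothetical label; match whatever labels your HEPs actually carry.
        selector: role == 'k8s-node'
      destination:
        ports: [53]
```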
And I’m still confused why I’m able to talk to kube-dns directly from the host, as well as from a subnet on a WireGuard interface (not a Felix-managed WireGuard overlay), even without the above configuration, while the hostNetwork pod can’t (when resolving the Docker image hostname).
I seem unable to block access to the kube-dns cluster IP from the subnet on the WireGuard interface, no matter what I do.
This type of config does, however, work as expected for a different service in the default namespace. Perhaps the cluster DNS is handled in a special way.
The other, similar config that does work looks like this:

```yaml
- action: Allow
  source:
    nets:
      - 10.1.2.3/16 # the wireguard subnet
  destination:
    selector: app == 'otherdns-resolver'
```
Ah, I think the problem is that you’re using WireGuard, and we have a known issue with host-to-remote-pod connections.
Auto-HEPs do solve most of the issues here, since they automatically contain the right IPs, including all the tunnel IPs.
To explain the other issue you saw: host-to-local-pod traffic is always allowed; this is so kubelet can access pods.
Oh, OK. But it does work for accessing my own DNS service in the default namespace, while denying access to other services not listed in the policy, as expected; only kube-dns behaves differently.
I did manage to block kube-dns for the WireGuard subnet by specifically adding a deny rule like this:
```yaml
- action: Deny
  source:
    nets:
      - 10.1.2.3/16 # the wireguard subnet
  destination:
    nets:
      - 10.96.0.10/32 # the kube-dns cluster IP
```
But if I try to delete the destination to make it a default deny while applyOnForward is set, it also denies the first Allow and basically makes everything inaccessible for the WireGuard subnet.
And finally, an off-topic question: is there a recommended way to `calicoctl apply` these policies that also removes ones that no longer exist in the provided YAML? Kind of like `kubectl apply --prune`.
If you delete the destination from that rule then it won’t be a “default deny”, it’ll be a “deny all”.
Policy is processed in this order:
- all doNotTrack policy
- all pre-DNAT policy
- (service NAT happens here)
- all normal policy
(policy of different types doesn’t interleave)
If the pre-DNAT policy drops the traffic then the decision cannot be reversed by the “normal” policy.
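Because a pre-DNAT drop is final, pre-DNAT policy is normally written as allow rules on the host endpoints. A sketch, assuming a hypothetical HEP label `role == 'k8s-node'`:

```yaml
# Sketch: pre-DNAT policy applies to host endpoints, is evaluated before
# service NAT, and requires applyOnForward: true. It may only contain
# ingress rules, and a Deny here cannot be reversed by normal policy.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-dns-prednat   # hypothetical name
spec:
  preDNAT: true
  applyOnForward: true
  selector: role == 'k8s-node'   # assumed HEP label
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: UDP
      destination:
        ports: [53]
```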
No, we don’t have `--prune`; could be a neat feature.
Ah, thanks for the clarification
Another related issue is this.
While activating dual stack (going from IPv4-only to both), I noticed Calico doesn’t manage the firewall rules on the IPv6 side: there are no calico* chains when listing with ip6tables.
Even though IPv6 is activated, IPs are being assigned to pods, and the HEPs have IPv6 addresses on them.
How do you activate this?
I did those steps, and I have routable IPv6 between pods. It’s only the firewalling that doesn’t seem to be applied.
Maybe I forgot some step
Do you have the FELIX_IPV6SUPPORT env var set to “true”? It sounds like IPv6 is disabled if you’re not seeing iptables rules.
`calicoctl get felixConfiguration default` shows that it’s enabled. The IPv6 pools configured in Calico are used and IPs are allocated to the pods; it’s just the firewall that doesn’t get any IPv6 updates.
Aha, there was also an environment variable on the daemonset that says false.
Environment variables override the felixconfiguration resource. Please check if the FELIX_IPV6SUPPORT env var is set on the calico-node pods.
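For reference, this is the relevant fragment of the calico-node container spec in the DaemonSet (the surrounding fields are omitted here):

```yaml
# Fragment of the calico-node container's env list; this value overrides
# the ipv6Support setting in the FelixConfiguration resource.
env:
  - name: FELIX_IPV6SUPPORT
    value: "true"
```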