[Solved] issue with kubernetes network policy and calico CNI - ingress namespaceSelector not working

All,

I have this situation where a certain K8S network policy is not working for me:

  1. No policy → connection across nodes is working
  2. When I set only port ingress filter → connection across nodes is working
  3. However, when I add a namespaceSelector to the same policy → only same-node traffic works (e.g. node2→node2, but not node2→node3)

I can see that the connection remains in the SYN_SENT state.
I can see that iptables is populated, but I can't tell whether any of the rules there is causing the problem.
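
For anyone reproducing this: one way to see the stuck connections (assuming conntrack-tools is installed on the source node) is:

  # list TCP conntrack entries stuck in SYN_SENT
  sudo conntrack -L -p tcp --state SYN_SENT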

I have two scenarios with pod-to-pod traffic:
a) Prometheus
b) Kong API gateway
Both exhibit the same behavior: same-node targets work, but cross-node targets time out.

I am using the default install for a small on-prem cluster, as advised by the kubeadm documentation.
images: calico/kube-controllers:v3.16.1 and calico/node:v3.16.1

Is this a known issue? I couldn’t find anything related.

Thanks,
Timo

Configuration is default:
calico_backend: bird
cni_network_config: |-
  {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
      {
        "type": "calico",
        "log_level": "info",
        "datastore_type": "kubernetes",
        "nodename": "__KUBERNETES_NODE_NAME__",
        "mtu": __CNI_MTU__,
        "ipam": {
          "type": "calico-ipam"
        },
        "policy": {
          "type": "k8s"
        },
        "kubernetes": {
          "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
      },
      {
        "type": "portmap",
        "snat": true,
        "capabilities": {"portMappings": true}
      },
      {
        "type": "bandwidth",
        "capabilities": {"bandwidth": true}
      }
    ]
  }
typha_service_name: none
veth_mtu: "1440"

Hi Timo,

I’m afraid I don’t have a full answer for you yet, but 3 things spring to mind.

  1. Can you tell us more about how your nodes are connected, and if you’re using an overlay (vxlan/ip-in-ip) or not?

  2. Can you share an example of NetworkPolicy that stops the inter-node communication when you add a namespaceSelector?

  3. Have you tried running ‘sudo watch iptables-save -c’ on the source and destination nodes, while trying to establish an inter-node connection? If it’s iptables dropping the packet, an increasing counter should indicate which rule is responsible for that.
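
For example, something along these lines (the file names are just placeholders):

  # snapshot the rule counters before and after an inter-node connection attempt
  sudo iptables-save -c > /tmp/counters-before
  # ...now try the connection from the other node...
  sudo iptables-save -c > /tmp/counters-after
  diff /tmp/counters-before /tmp/counters-after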

Hi Neil,

thanks for giving me the right hints. After diffing the packet counts before and after applying the namespaceSelector, it turns out that the sourceIP ipset rule is not firing.
I am not sure why this is the case, but after disabling firewalld (I am using CentOS 7 with IP-in-IP), it started working.
I tried adding rules to the firewall to allow IP tunneling, but that does not seem to be the issue. Here is what I ended up with:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources:
  services: dhcpv6-client ssh
  ports: 10250/tcp 30000-32767/tcp 6783/tcp 6783/udp 6784/udp 10249/tcp 9100/tcp 10255/tcp 6783-6784/udp 8472/udp 179/tcp
  protocols:
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
        rule protocol value="4" accept
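
For reference, a rich rule like the one above can be added with something like this (a sketch; I am not certain it is the right rule for IP-in-IP):

  # allow IP protocol 4 (IP-in-IP) in the public zone
  sudo firewall-cmd --permanent --zone=public --add-rich-rule='rule protocol value="4" accept'
  sudo firewall-cmd --reload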

The network policy I am using is quite simple:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-public-api
  namespace: helloworld-netcore-master
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          nbly-role: ingress-kong
    ports:
    - port: 8080
      protocol: TCP
  podSelector: {}
  policyTypes:
  - Ingress

Modifying the firewall rules as outlined above did not work, unfortunately.
Suggestions on how to configure firewalld properly for Calico IP-in-IP are greatly appreciated.

Thanks,
Timo

Hi Timo,

It’s good that it works after disabling firewalld, but I also don’t yet understand in detail why that would be. If firewalld was the problem, it should have affected the inter-node communication before you added the namespaceSelector to your NetworkPolicy, as well as after.

Anyway, focussing first on what you need to allow through firewalld, please see the Calico documentation on network requirements.

Regarding the namespaceSelector: without that being specified, the allowed from peers are all pods in the helloworld-netcore-master namespace. With the namespaceSelector as above, the allowed peers are all pods in namespaces with the nbly-role: ingress-kong label. Does that help at all, in your setup?
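
In case it helps, this is roughly how you would apply and verify that label (the "kong" namespace name is just an example; use whatever namespace your Kong pods run in):

  # label the namespace Kong runs in, then confirm the selector matches it
  kubectl label namespace kong nbly-role=ingress-kong
  kubectl get namespaces -l nbly-role=ingress-kong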

Finally you mentioned “the sourceIP ipset rule is not firing” - I’m not sure I understand; can you show me more precisely what you mean?

Best wishes,
Neil

Hi Neil,

yes, I have added all the ports mentioned in the documentation. However, I am unclear about this specific requirement for IP-in-IP:
"IP-in-IP, often represented by its protocol number 4"
I tried adding a firewalld rule for this, but I am not sure whether it was correct (see above).

After some more digging I suspected that some source NAT was going on. Since we only use ClusterIP services, the cluster itself should not be source-NATing pod-to-pod traffic, so the SNAT had to come from firewalld's masquerading. I removed the masquerade setting from the zone and rebooted the machines. Voilà, it is working now.
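
For anyone else hitting this, the equivalent firewall-cmd change would be roughly the following (a sketch; in my case I edited the zone and rebooted, so I have not verified this exact sequence):

  # drop the masquerade setting from the public zone
  sudo firewall-cmd --permanent --zone=public --remove-masquerade
  sudo firewall-cmd --reload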

Thanks for getting me on the right track!

Ah, good news, thanks for reporting back here.

The IP-in-IP thing is that 4 is not a port number like with TCP-based or UDP-based protocols; it is an IP protocol number, identified in the IP header itself, one layer below ports. The iptables match would be --protocol 4. I don't know whether there's a firewalld equivalent, but there probably is.
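
For illustration, a plain iptables rule matching IP-in-IP would look roughly like this (just a sketch; where it belongs in your chains depends on your setup):

  # accept IP protocol 4 (IP-in-IP) traffic arriving at the node
  sudo iptables -A INPUT -p 4 -j ACCEPT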