Hello! I’m deploying Calico as the CNI for a K3s cluster. I’m having trouble getting my pods to resolve DNS, and I believe it’s because they cannot reach the kube-dns service IP. If I disable all GlobalNetworkPolicies, DNS works again. Here are the global policies I have in effect:
```yaml
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    creationTimestamp: "2022-03-04T22:31:10Z"
    name: allow-ping
    resourceVersion: "5508678"
    uid: 600035a3-087e-4bef-8afc-798d7aa02995
  spec:
    ingress:
    - action: Allow
      destination: {}
      icmp:
        type: 8
      protocol: ICMP
      source: {}
    - action: Allow
      destination: {}
      icmp:
        type: 128
      protocol: ICMPv6
      source: {}
    selector: all()
    types:
    - Ingress
- apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    creationTimestamp: "2022-03-04T22:30:41Z"
    name: default-deny
    resourceVersion: "5512831"
    uid: dcc78cb7-3c54-48ff-b44d-d5eb0638b878
  spec:
    egress:
    - action: Allow
      destination:
        ports:
        - 53
        selector: k8s-app == "kube-dns"
      protocol: UDP
      source: {}
    - action: Allow
      destination:
        ports:
        - 53
        selector: k8s-app == "kube-dns"
      protocol: TCP
      source: {}
    namespaceSelector: has(kubernetes.io/metadata.name) && kubernetes.io/metadata.name
      not in {"kube-system", "calico-system"}
    types:
    - Ingress
    - Egress
kind: GlobalNetworkPolicyList
metadata:
  resourceVersion: "5517828"
```
Now, when I set up this cluster I used the Calico operator, and here is my Installation configuration:
```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operator.tigera.io/v1","kind":"Installation","metadata":{"annotations":{},"name":"default"},"spec":{"calicoNetwork":{"containerIPForwarding":"Enabled","ipPools":[{"blockSize":26,"cidr":"192.168.100.0/22","encapsulation":"IPIP","natOutgoing":"Enabled","nodeSelector":"all()"}],"mtu":1400,"nodeAddressAutodetectionV4":{"firstFound":false,"interface":"wg-.*"}}}}
  creationTimestamp: "2022-01-07T19:49:06Z"
  generation: 8
  name: default
  resourceVersion: "3612203"
  uid: c6ac59b5-d75c-4087-85fa-434b25cd06a3
spec:
  calicoNetwork:
    bgp: Enabled
    containerIPForwarding: Enabled
    hostPorts: Enabled
    ipPools:
    - blockSize: 26
      cidr: 192.168.100.0/22
      encapsulation: IPIP
      natOutgoing: Enabled
      nodeSelector: all()
    linuxDataplane: Iptables
    mtu: 1400
    multiInterfaceMode: None
    nodeAddressAutodetectionV4:
      firstFound: false
      interface: wg-.*
  cni:
    ipam:
      type: Calico
    type: Calico
  flexVolumePath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
  nodeUpdateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  variant: Calico
status:
  computed:
    calicoNetwork:
      bgp: Enabled
      containerIPForwarding: Enabled
      hostPorts: Enabled
      ipPools:
      - blockSize: 26
        cidr: 192.168.100.0/22
        encapsulation: IPIP
        natOutgoing: Enabled
        nodeSelector: all()
      linuxDataplane: Iptables
      mtu: 1400
      multiInterfaceMode: None
      nodeAddressAutodetectionV4:
        firstFound: false
        interface: wg-.*
    cni:
      ipam:
        type: Calico
      type: Calico
    flexVolumePath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
    nodeUpdateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
    variant: Calico
  mtu: 1400
  variant: Calico
```
Notice that I set my IPPool CIDR to 192.168.100.0/22. However, I never actually overrode K3s’ default `cluster-cidr` and `service-cidr` values, which are 10.42.0.0/16 and 10.43.0.0/16 respectively. The former is supposed to determine which IPs pods are assigned; the latter is reserved for service IPs. Yet I can launch pods, and they get IPs from the Calico IPPool. Still, it seems like this is not configured properly.
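As a sanity check (a quick stdlib-only snippet on my workstation, not part of the cluster config), I verified that my IPPool CIDR does not overlap either of the K3s default ranges:

```python
import ipaddress

# CIDRs from my setup: the Calico IPPool and the K3s defaults I never overrode
ippool = ipaddress.ip_network("192.168.100.0/22")
cluster_cidr = ipaddress.ip_network("10.42.0.0/16")   # K3s default --cluster-cidr
service_cidr = ipaddress.ip_network("10.43.0.0/16")   # K3s default --service-cidr

print(ippool.overlaps(cluster_cidr))  # False
print(ippool.overlaps(service_cidr))  # False
print(ippool.num_addresses)           # 1024 addresses in a /22
```

So the pool is completely disjoint from both defaults, which makes me wonder how they are supposed to relate.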
I have been trying to find documentation on how the `service-cidr` in particular should relate to Calico’s IPPools. Should I set the IPPool to be a superset of both the cluster CIDR and the service CIDR? Should I have two IPPools, one for the cluster CIDR and one for the service CIDR? Should they have different configurations?
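For concreteness, one alternative I have been wondering about (purely hypothetical, I have not applied this) is allowing egress to the service CIDR by network rather than by pod selector, with a rule along these lines:

```yaml
# Hypothetical egress rule sketch: allow traffic to the K3s default
# service-cidr (10.43.0.0/16) by network instead of matching kube-dns pods.
# Not applied to my cluster; it only illustrates what I'm asking about.
- action: Allow
  destination:
    nets:
    - 10.43.0.0/16
```

I am unsure whether something like this is necessary, or whether the pod-selector form should already cover the service VIP.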
Thank you very much in advance for the clarity!