Cannot access Pods from the Master

Hello,
I am new to Kubernetes, and I set up my environment this way: 1 master and 2 nodes. Here is part of the kubectl describe node output for the master (k8sn01):

Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 28 Oct 2020 21:57:24 +0000   Wed, 28 Oct 2020 21:57:24 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Thu, 29 Oct 2020 09:58:20 +0000   Mon, 19 Oct 2020 18:26:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Thu, 29 Oct 2020 09:58:20 +0000   Mon, 19 Oct 2020 18:26:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Thu, 29 Oct 2020 09:58:20 +0000   Mon, 19 Oct 2020 18:26:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Thu, 29 Oct 2020 09:58:20 +0000   Mon, 19 Oct 2020 18:56:29 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.0.200
  Hostname:    k8sn01
Capacity:
  cpu:                2
  ephemeral-storage:  19475088Ki
  hugepages-2Mi:      0
  memory:             4030576Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17948241072
  hugepages-2Mi:      0
  memory:             3928176Ki
  pods:               110
System Info:
  Machine ID:                 1ab07247d00b473f9625c5ab810e540b
  System UUID:                1ab07247-d00b-473f-9625-c5ab810e540b
  Boot ID:                    9781375f-502c-4190-89a9-b21ce0e0c6c0
  Kernel Version:             5.4.0-52-generic
  OS Image:                   Ubuntu 20.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.13
  Kubelet Version:            v1.19.3
  Kube-Proxy Version:         v1.19.3
PodCIDR:                      192.168.0.0/24

Here are my pods:

k8sn01:~$ kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx-app   1/1     Running   0          11h   192.168.1.7   k8sn02   <none>           <none>
pingtest    1/1     Running   0          11h   192.168.2.7   k8sn03   <none>           <none>

I can ping 192.168.1.7 from k8sn03 and 192.168.2.7 from k8sn02, but I cannot ping either pod from the master (k8sn01).
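
For reference, the tests are plain pings against the pod IPs, e.g. (the -c 3 here is just to limit the count):

# works: node to pod on the other node
k8sn02:~$ ping -c 3 192.168.2.7
k8sn03:~$ ping -c 3 192.168.1.7

# fails: master to either pod (100% packet loss)
k8sn01:~$ ping -c 3 192.168.1.7
k8sn01:~$ ping -c 3 192.168.2.7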

I noticed that there is no echo reply from the pods to the master:

k8sn01:~$ sudo tcpdump -nn icmp
[sudo] password for student: 
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
09:40:37.884814 IP 192.168.0.0 > 192.168.2.7: ICMP echo request, id 16, seq 50, length 64
09:40:38.908823 IP 192.168.0.0 > 192.168.2.7: ICMP echo request, id 16, seq 51, length 64
09:40:39.932835 IP 192.168.0.0 > 192.168.2.7: ICMP echo request, id 16, seq 52, length 64
09:40:40.956843 IP 192.168.0.0 > 192.168.2.7: ICMP echo request, id 16, seq 53, length 64
09:40:41.980805 IP 192.168.0.0 > 192.168.2.7: ICMP echo request, id 16, seq 54, length 64
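
To check whether the requests arrive at the destination node at all, the same capture can be repeated on k8sn03 while pinging from the master:

k8sn03:~$ sudo tcpdump -nn -i flannel.1 icmp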

When I ping 192.168.2.7 from k8sn02, I do get an echo reply:

k8sn02:~$ sudo tcpdump -nn icmp 
[sudo] password for student: 
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
09:43:25.787558 IP 192.168.1.0 > 192.168.2.7: ICMP echo request, id 9, seq 58, length 64
09:43:25.788134 IP 192.168.2.7 > 192.168.1.0: ICMP echo reply, id 9, seq 58, length 64
09:43:26.811509 IP 192.168.1.0 > 192.168.2.7: ICMP echo request, id 9, seq 59, length 64
09:43:26.812000 IP 192.168.2.7 > 192.168.1.0: ICMP echo reply, id 9, seq 59, length 64
09:43:27.835543 IP 192.168.1.0 > 192.168.2.7: ICMP echo request, id 9, seq 60, length 64
09:43:27.835979 IP 192.168.2.7 > 192.168.1.0: ICMP echo reply, id 9, seq 60, length 64

Is this expected behavior, and if so, why?
Thank you in advance for your answer.

Is Calico running on the master? If not, it won’t be part of the Calico network.
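
You can check with:

kubectl get pods -n kube-system -o wide

and verify that a canal pod (it runs the calico-node and flannel containers) is Running on k8sn01.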

This is what I did:

k8sn01:~$ curl https://docs.projectcalico.org/manifests/canal.yaml -O

k8sn01:~$ kubectl apply -f canal.yaml
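
The pod network that the manifest configures for flannel can be inspected like this (it has to match the cluster's pod CIDR):

k8sn01:~$ grep -n -A 4 'net-conf.json' canal.yaml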

Also, the routes themselves look fine: the master has routes to both nodes' pod subnets, yet it still cannot reach the pods.

k8sn01:~$ ip route
default via 192.168.0.1 dev ens18 proto dhcp src 192.168.0.200 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.0.0/24 dev ens18 proto kernel scope link src 192.168.0.200 
192.168.0.1 dev ens18 proto dhcp scope link src 192.168.0.200 metric 100 
192.168.0.2 dev cali02e04fd6db8 scope link 
192.168.1.0/24 via 192.168.1.0 dev flannel.1 onlink 
192.168.2.0/24 via 192.168.2.0 dev flannel.1 onlink 

k8sn02:~$ ip route
default via 192.168.0.1 dev ens18 proto dhcp src 192.168.0.201 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.0.0/24 dev ens18 proto kernel scope link src 192.168.0.201 
192.168.0.1 dev ens18 proto dhcp scope link src 192.168.0.201 metric 100 
192.168.1.3 dev cali731b4106e23 scope link 
192.168.1.7 dev cali63d4a091157 scope link 
192.168.2.0/24 via 192.168.2.0 dev flannel.1 onlink 

k8sn03:~$ ip route
default via 192.168.0.1 dev ens18 proto dhcp src 192.168.0.202 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.0.0/24 dev ens18 proto kernel scope link src 192.168.0.202 
192.168.0.1 dev ens18 proto dhcp scope link src 192.168.0.202 metric 100 
192.168.1.0/24 via 192.168.1.0 dev flannel.1 onlink 
192.168.2.4 dev cali5727df77a53 scope link 
192.168.2.7 dev cali6854d2a2ae4 scope link

It looks like this route is missing from both nodes:

192.168.0.0/24 via 192.168.0.0 dev flannel.1 onlink

I tried to run:

k8sn02:~$ sudo ip route add 192.168.0.0/24 via 192.168.0.0 dev flannel.1
Error: Nexthop has invalid gateway.
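
The routes flannel itself installs (see the ip route outputs above) carry the onlink flag, which tells the kernel to accept a gateway that no interface route covers, so presumably the matching command would be:

k8sn02:~$ sudo ip route add 192.168.0.0/24 via 192.168.0.0 dev flannel.1 onlink

But even that cannot work here: the node already has 192.168.0.0/24 on ens18 (its own LAN), so the kernel would reject the duplicate prefix, and replacing the LAN route would cut the node off the network.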

Your host network clashes with the pod network; you need to pick a different pod CIDR. (The host CIDR, pod CIDR, and service CIDR must not overlap.)
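
If the cluster was set up with kubeadm, the redeploy is roughly this (10.244.0.0/16 is the default network in canal.yaml and does not overlap your 192.168.0.0/24 LAN):

# on the master and both nodes: tear down the current cluster
sudo kubeadm reset

# on the master: re-initialise with a non-overlapping pod CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# re-join k8sn02 and k8sn03 using the 'kubeadm join ...' command printed by init,
# then reapply the CNI manifest:
kubectl apply -f canal.yaml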

Thanks. I believe that I have to redeploy everything.