Newb-Q: Can't ping pod network from master (IaaS k8s 1.20.4 in Azure on Ubuntu 18.04 w/ Calico)

New to k8s and Calico, and following what might be an outdated tutorial online. Something tells me I didn't set up Calico properly, or I have an NSG blocking traffic somewhere.

2-node cluster (1 master, 1 worker)

My two VMs are in Azure: 10.0.2.4 (master) and 10.0.2.5 (worker). I ssh to the master to administer the cluster (i.e. kubectl is installed on the master). I did NOT make any /etc/hosts changes on either VM. The VMs can ping each other by short name (albeit with some funky Azure DNS suffixes appended).

Vanilla Ubuntu 18.04 VMs. Installed (and enabled) docker.io and kubeadm.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml (I didn't modify calico.yaml at all… was I supposed to? The CIDR in it matches my --pod-network-cidr, but the line is commented out.)
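
(For reference, the commented-out bit I mean looks like this, inside the calico-node container's env; from what I can tell, calico-node falls back to a built-in default of 192.168.0.0/16 anyway, which happens to match the --pod-network-cidr above:)

            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"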

(Am I supposed to do something else here with calicoctl or the tigera-operator? The tutorial I'm following makes no mention of either, although I'm seeing some conflicting instructions on projectcalico.org.)
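
(I haven't touched calicoctl at all; from the docs it doesn't look required for a plain manifest install, but if it were set up, I gather something like this would show whether the default pool is using IPIP or VXLAN encapsulation, in the IPIPMODE/VXLANMODE columns:)

$ calicoctl get ippool -o wide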

From here I joined my worker node to the cluster, and everything seemed to work as expected until I created a pod. See the ping "Permission denied" errors and ifconfig output below. Any thoughts?

parker@c1-master1:~$ kubectl run hello-world-pod --image=gcr.io/google-samples/hello-app:1.0
pod/hello-world-pod created
parker@c1-master1:~$ kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP                NODE       NOMINATED NODE   READINESS GATES
hello-world-pod   1/1     Running   0          5s    192.168.222.203   c1-node1   <none>           <none>


parker@c1-master1:~$ ping -v 192.168.222.203
ping: socket: Permission denied, attempting raw socket...
ping: socket: Permission denied, attempting raw socket...
PING 192.168.222.203 (192.168.222.203) 56(84) bytes of data.
^C
--- 192.168.222.203 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4085ms
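
(Would hitting the app over HTTP instead of ICMP be a better test? hello-app listens on port 8080, if I understand the sample image correctly, so something like:)

parker@c1-master1:~$ curl http://192.168.222.203:8080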



parker@c1-master1:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       hello-world-pod                            1/1     Running   0          70s
kube-system   calico-kube-controllers-6949477b58-9cdt7   1/1     Running   6          7d22h
kube-system   calico-node-4vlz7                          1/1     Running   6          7d22h
kube-system   calico-node-5xk29                          1/1     Running   9          7d22h
kube-system   calico-node-z54zg                          1/1     Running   1          7d21h
kube-system   coredns-74ff55c5b-4zj45                    1/1     Running   6          7d22h
kube-system   coredns-74ff55c5b-l575k                    1/1     Running   6          7d22h
kube-system   etcd-c1-master1                            1/1     Running   6          7d22h
kube-system   kube-apiserver-c1-master1                  1/1     Running   10         7d22h
kube-system   kube-controller-manager-c1-master1         1/1     Running   7          7d22h
kube-system   kube-proxy-hrcz5                           1/1     Running   5          7d22h
kube-system   kube-proxy-kd649                           1/1     Running   6          7d22h
kube-system   kube-proxy-rht2g                           1/1     Running   1          7d21h
kube-system   kube-scheduler-c1-master1                  1/1     Running   7          7d22h


parker@c1-master1:~$ ifconfig -a
cali49232bbf622: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 1300  bytes 124647 (124.6 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1371  bytes 135235 (135.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

calibb472e2b57c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 1298  bytes 124459 (124.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1365  bytes 136936 (136.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

calie45dc70da56: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 764  bytes 67266 (67.2 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 867  bytes 460084 (460.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:5d:bb:dd:d4  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.4  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::20d:3aff:fef5:fc4c  prefixlen 64  scopeid 0x20<link>
        ether 00:0d:3a:f5:fc:4c  txqueuelen 1000  (Ethernet)
        RX packets 10987  bytes 4080400 (4.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10920  bytes 3990126 (3.9 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 235161  bytes 46922484 (46.9 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 235161  bytes 46922484 (46.9 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
        inet 192.168.19.64  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24  bytes 2016 (2.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

parker@c1-master1:~$

Well, I got this working with weave-net, following these instructions: https://medium.com/@patnaikshekhar/creating-a-kubernetes-cluster-in-azure-using-kubeadm-96e7c1ede4a

So this is very likely a Calico setup issue.

The calico.yaml at the URL you gave is configured to use IP-in-IP encapsulation, which is blocked by Azure networking. You need to use https://docs.projectcalico.org/manifests/calico-vxlan.yaml instead, which uses VXLAN encapsulation; that does work on Azure.
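
If you're curious about the difference, that manifest (from memory, so double-check against the file itself) sets calico_backend: "vxlan" in the calico-config ConfigMap, drops the BGP (-bird-ready/-bird-live) probe flags since BIRD isn't used, and flips the pool encapsulation in the calico-node env:

            - name: CALICO_IPV4POOL_IPIP
              value: "Never"
            - name: CALICO_IPV4POOL_VXLAN
              value: "Always"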

Thanks for the response. I’ll give it a whirl shortly and let you know how it goes.

No luck with calico-vxlan.yaml. I tore the cluster down and re-ran from scratch:

$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico-vxlan.yaml
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       hello-world-pod                            1/1     Running   0          5m53s
kube-system   calico-kube-controllers-69496d8b75-8rkrf   1/1     Running   0          8m4s
kube-system   calico-node-4k2fx                          0/1     Running   0          8m4s
kube-system   calico-node-nlm72                          0/1     Running   0          8m4s
kube-system   coredns-74ff55c5b-r4gbv                    1/1     Running   0          10m
kube-system   coredns-74ff55c5b-wnhk9                    1/1     Running   0          10m
kube-system   etcd-c1-master1                            1/1     Running   0          10m
kube-system   kube-apiserver-c1-master1                  1/1     Running   0          10m
kube-system   kube-controller-manager-c1-master1         1/1     Running   1          10m
kube-system   kube-proxy-n7g92                           1/1     Running   0          9m53s
kube-system   kube-proxy-xqldl                           1/1     Running   0          10m
kube-system   kube-scheduler-c1-master1                  1/1     Running   1          10m
$ kubectl get no
NAME         STATUS   ROLES                  AGE     VERSION
c1-master1   Ready    control-plane,master   10m     v1.20.4
c1-node1     Ready    <none>                 9m59s   v1.20.4
$ kubectl run hello-world-pod --image=gcr.io/google-samples/hello-app:1.0
pod/hello-world-pod created
$ kubectl get po -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP                NODE       NOMINATED NODE   READINESS GATES
hello-world-pod   1/1     Running   0          12s   192.168.222.193   c1-node1   <none>           <none>
$ ping 192.168.222.193
PING 192.168.222.193 (192.168.222.193) 56(84) bytes of data.
^C
--- 192.168.222.193 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3049ms

This doesn't look right. Can you kubectl describe one of those calico-node pods and see why it isn't Ready, please?
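
i.e. something along the lines of:

$ kubectl describe pod -n kube-system calico-node-4k2fx
$ kubectl logs -n kube-system calico-node-4k2fx -c calico-node

and look at the Events section for readiness probe failures.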