Pod-to-Pod communication not working in Calico

Hello,

I have a Kubernetes cluster running in an on-premises environment on Red Hat 8 VMs.

I have installed Calico as the CNI plugin for my cluster. All my Calico pods are running fine and BGP peering is established.

However, whenever I deploy an application, its pods cannot communicate with each other, either on the same node or across nodes.

Also, I am currently not sure which iptables backend Calico is using to manage the networking rules. I have the following in my calico.yaml:

  90s]'
            type: string
          iptablesBackend:
            description: IptablesBackend specifies which backend of iptables will
              be used. The default is legacy.
            type: string
          iptablesFilterAllowAction:
            type: string
          iptablesLockFilePath:
            description: 'IptablesLockFilePath is the location of the iptables
              lock file. You may need to change this if the lock file is not in
              its standard location (for example if you have mapped it into Felix''s
              container at a different path). [Default: /run/xtables.lock]'
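Since Red Hat 8 uses nftables as its default firewall framework, it can help to check which backend the host's iptables binary actually uses before changing anything in Calico. One way is to inspect the output of `iptables --version`, which typically ends in `(nf_tables)` or `(legacy)`. A minimal sketch, where `detect_backend` is a hypothetical helper (not part of Calico or iptables) that classifies such a version string:

```shell
# Hypothetical helper: classify an `iptables --version` string by backend.
# On an nftables host the output looks like "iptables v1.8.4 (nf_tables)";
# on a legacy host it looks like "iptables v1.8.4 (legacy)".
detect_backend() {
  case "$1" in
    *nf_tables*) echo "nft" ;;
    *legacy*)    echo "legacy" ;;
    *)           echo "unknown" ;;
  esac
}

# On the actual node you would run: detect_backend "$(iptables --version)"
detect_backend "iptables v1.8.4 (nf_tables)"   # -> nft
detect_backend "iptables v1.8.4 (legacy)"      # -> legacy
```

If the host is on the nftables backend while Felix is writing rules with the legacy backend (or vice versa), the rules land in a table the kernel dataplane is not consulting, which would match the pod-to-pod symptoms described above.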

I found an article which mentions that we need to manually tell Calico which backend to use by setting the IptablesBackend parameter to "nf-tables" or "legacy". I tried setting this up, but it is giving validation errors as below.

error: error validating "calico.yaml": error validating data: ValidationError(CustomResourceDefinition.spec.versions[0].schema.openAPIV3Schema.properties.spec.properties.iptablesBackend): unknown field "iptablesBackend" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSONSchemaProps; if you choose to ignore these errors, turn validation off with --validate=false
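The error suggests the field was added inside the CustomResourceDefinition schema in calico.yaml rather than on a FelixConfiguration resource, which is where Felix settings such as iptablesBackend actually belong. A minimal sketch, assuming the cluster serves the projectcalico.org/v3 API (via calicoctl, or via kubectl if the API server/aggregation supports it) and that your Calico version accepts the NFT/Legacy/Auto values for this field:

```yaml
# Sketch only: set the iptables backend on the default FelixConfiguration,
# not inside the CRD schema in calico.yaml.
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  iptablesBackend: NFT   # or Legacy; newer Calico versions also support Auto
```

This would typically be applied with `calicoctl apply -f felix-config.yaml` (the filename here is just an example), after which Felix picks up the new backend setting without re-applying the whole calico.yaml manifest.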

Could you please help?

Thanks in advance.

Hello @ravindras85, that's a great question! I would suggest directing this to the Calico Users' Slack, here. There is a big community in our Calico Slack, and everyone is great at collaborating! Please let me know if anything goes wrong with the link.