Dual Stack Documentation Confusion

I’m prepping our dev cluster for IPv4/IPv6 dual stack and I’m following these instructions:

https://docs.projectcalico.org/networking/dual-stack

I’m hoping someone here has some experience with it and can clear up the confusion.

Specifically, in step 2 I’m instructed to set CALICO_IPV6POOL_CIDR to:

> the same as the IPv6 range you configured as the cluster CIDR to kube-controller-manager and kube-proxy

I wouldn’t normally configure this CIDR for IPv4, since I use kubeadm and the configuration is picked up automatically. Do I need to add it for IPv6, or will it also get picked up automatically?
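For reference, this is roughly what I think step 2 is asking me to add to calico.yaml; the CIDR below is a placeholder standing in for my cluster’s actual IPv6 range:

```yaml
# Excerpt from the calico-node DaemonSet env section of calico.yaml.
# The IPv6 CIDR value is a placeholder, not a default.
- name: IP6
  value: "autodetect"
- name: CALICO_IPV6POOL_CIDR
  value: "fd00:10:244::/64"   # same IPv6 range passed as cluster CIDR to kube-controller-manager/kube-proxy
- name: FELIX_IPV6SUPPORT
  value: "true"
```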

ref (emphasis added): Install Calico networking and network policy for on-premises deployments | Calico Documentation

> If you are using pod CIDR 192.168.0.0/16, skip to the next step. If you are using a different pod CIDR with kubeadm, no changes are required - Calico will automatically detect the CIDR based on the running configuration. For other platforms, make sure you uncomment the CALICO_IPV4POOL_CIDR variable in the manifest and set it to the same value as your chosen pod CIDR.

Then, for the BGP configuration: BGP configuration | Calico Documentation

The serviceClusterIPs field is defined as:

> A list of valid IPv4 CIDR blocks.

But since I’m using dual stack, services should be able to get an IPv6 address if their ipFamily field is set to IPv6. How do I specify an IPv6 CIDR? Do I actually need this option if I only want service clusterIP addresses to be accessible inside the cluster, or is it for getting external traffic to those IPs?
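To make it concrete, this is roughly what I was hoping to write, assuming the field can also take IPv6 blocks (the service CIDRs here are just examples):

```yaml
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  serviceClusterIPs:
    - cidr: 10.96.0.0/12       # example IPv4 service cluster CIDR
    - cidr: fd00:10:96::/112   # example IPv6 service cluster CIDR, assuming IPv6 is accepted here
```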

Based on that definition, I think the documentation for BGPConfiguration may need updating.

I believe we do autodetect both IPv4 and IPv6 CIDRs from the kubeadm config.
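In other words, with a dual-stack kubeadm config along these lines (the CIDRs are only illustrative), both pod CIDRs should be picked up automatically:

```yaml
# Illustrative kubeadm ClusterConfiguration excerpt with dual-stack CIDRs
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16,fd00:10:244::/64
  serviceSubnet: 10.96.0.0/12,fd00:10:96::/112
```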

> Do I actually need this option if I only want service clusterIP addresses to be accessible inside the cluster?

No, that field is only needed if you want to advertise service IPs to BGP peers outside the cluster.
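By “BGP peers outside the cluster” I mean something like a top-of-rack router configured as a BGPPeer resource, for example (the peer address and AS number here are made up):

```yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rack1-tor            # example name
spec:
  peerIP: 192.0.2.1          # example router address
  asNumber: 64512            # example AS number
```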

Great, thanks a lot for your reply

Follow-up question: there are a few mentions in the docs about not needing any encapsulation or BGP if your nodes are all on the same L2 network, but there’s no information on how to configure that. Our nodes are on the same L2 network, so I’d like to go that route since it sounds like the simplest, lowest-overhead method. Do you know where I can look for a configuration example for that?

You do need BGP in that scenario, but you can turn off encapsulation by setting ipipMode to Never on the IPPool resource: https://docs.projectcalico.org/reference/resources/ippool
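A minimal sketch of such a pool (the name and CIDR are just examples) would be:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool   # example name
spec:
  cidr: 192.168.0.0/16        # example pod CIDR
  ipipMode: Never             # no IP-in-IP encapsulation
  vxlanMode: Never
  natOutgoing: true
  nodeSelector: all()
```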

OK, maybe it’s a little ambiguous; one example is here:

The title is “Unencapsulated, not peered with physical infrastructure”, but this would still be using BGP to peer between cluster nodes, just not with routers, correct? (My confusion was over the term “physical infrastructure”.)

Yes, that would still mean using BGP to peer between cluster nodes, just not with routers. At the risk of introducing even more confusion, you can also run VXLAN with cross-subnet mode, which doesn’t use BGP, and will give the same overall result if running in a single L2 network/subnet.
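For completeness, cross-subnet VXLAN is also configured on the IP pool, roughly like this (again, example name and CIDR):

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool   # example name
spec:
  cidr: 192.168.0.0/16        # example pod CIDR
  vxlanMode: CrossSubnet      # encapsulate only traffic that crosses a subnet boundary
  ipipMode: Never
  natOutgoing: true
  nodeSelector: all()
```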

The “determine best networking option” doc is high up the list of docs we want to improve. It’s a bit confusing and hard to follow at the moment. I think @hogepodge mentioned he plans to try to improve it sometime in the next few weeks.

OK, great, thanks for your reply. I have no problem using BGP between the nodes, and I can’t use VXLAN because I’m going dual stack (the docs say VXLAN doesn’t support IPv6), so that seems like my best option.