# Dual Stack

Pre-release

How to create a K8s Dual Stack cluster with KIND.
## For the Impatient
- Download the KIND binary corresponding to your OS:

  ```sh
  wget https://github.com/aojea/kind/releases/download/dualstack/kind.linux
  mv kind.linux kind
  chmod +x kind
  ```
- Create your KIND configuration and store it in a file, e.g. `dual-stack.yaml`:
  ```yaml
  # a dual-stack cluster
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    disableDefaultCNI: true
    ipFamily: DualStack
  nodes:
  - role: control-plane
  - role: worker
  - role: worker
  ```
- Create your cluster (KIND's default CNI plugin does not support dual stack):

  ```sh
  ./kind create cluster --config dual-stack.yaml
  ```
- Install a CNI with dual stack support:

  ```sh
  kubectl apply -f https://raw.githubusercontent.com/aojea/kindnet/master/install-kindnet.yaml
  ```
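Once the CNI manifest is applied, the nodes should transition to Ready; a quick sanity check with standard kubectl commands (nothing fork-specific) might look like:

```shell
# Nodes stay NotReady until a CNI is installed; wait for them to come up
kubectl wait --for=condition=Ready nodes --all --timeout=120s

# Each node should now list its InternalIP addresses
kubectl get nodes -o wide
```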
## For the Curious
- Build a KIND binary with dual-stack support:

  ```sh
  # clone the fork
  git clone https://github.com/aojea/kind.git
  # cd into the repository
  cd kind
  # switch to the dualstack branch
  git checkout dualstack
  # and build it
  go build
  ```
- Create your node image with dual stack support (see https://kind.sigs.k8s.io/docs/user/quick-start/#building-images):

  ```sh
  ./kind build node-image --image kindest/kindnode:dualstack
  ```
- Create the cluster using the node image you just built and the following configuration, saved e.g. as `config-kind.yaml`:

  ```yaml
  # a dual-stack cluster
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    ipFamily: DualStack
  nodes:
  - role: control-plane
  - role: worker
  - role: worker
  ```

  ```sh
  ./kind create cluster --image kindest/kindnode:dualstack --config config-kind.yaml -v3
  ```
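As a quick check that the cluster came up, you can query it through kubectl; `kind-kind` is KIND's default context name, so adjust it if you passed `--name`:

```shell
# Point kubectl at the new cluster and confirm the control plane responds
kubectl cluster-info --context kind-kind

# List the nodes and their addresses
kubectl get nodes -o wide
```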
## Check that your cluster is really running dual stack
- Check that pods have both IPv4 and IPv6 addresses:

  ```sh
  kubectl run -i --tty busybox --image=busybox -- ip a
  ```

  ```
  kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
  If you don't see a command prompt, try pressing enter.
  Error attaching, falling back to logs: unable to upgrade connection: container busybox not found in pod busybox-86c4cfd46-hbd44_default
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  3: eth0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
      link/ether 5e:20:cb:df:76:0e brd ff:ff:ff:ff:ff:ff
      inet 10.244.2.3/24 brd 10.244.2.255 scope global eth0
         valid_lft forever preferred_lft forever
      inet6 fd00:10:244:0:2::3/80 scope global
         valid_lft forever preferred_lft forever
      inet6 fe80::5c20:cbff:fedf:760e/64 scope link
         valid_lft forever preferred_lft forever
  ```
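Besides inspecting a pod, you can confirm that each node was assigned a pod CIDR from both families. This reads the standard `podCIDRs` node field with a jsonpath query:

```shell
# Print each node name with its assigned pod CIDRs;
# a dual-stack node shows one IPv4 and one IPv6 range
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'
```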
- If everything went well, you can create a deployment:

  ```sh
  kubectl apply -f https://k8s.io/examples/application/deployment.yaml
  ```

- We configured the cluster to use IPv4 as the default family, so we can create an IPv4 service with:

  ```sh
  kubectl expose deployment.apps/nginx-deployment
  ```
- We can create an IPv6 service using the following YAML:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx
    labels:
      run: nginx
  spec:
    ipFamily: IPv6
    ports:
    - port: 80
      protocol: TCP
    selector:
      run: nginx
  ```
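Assuming the manifest above is saved as `nginx-v6-svc.yaml` (the filename is arbitrary), apply it as usual:

```shell
# Create the IPv6 service from the manifest above
kubectl apply -f nginx-v6-svc.yaml
```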
- Check that the services were created:

  ```sh
  kubectl get svc
  ```

  ```
  NAME               TYPE        CLUSTER-IP         EXTERNAL-IP   PORT(S)   AGE
  kubernetes         ClusterIP   10.96.0.1          <none>        443/TCP   74m
  nginx              ClusterIP   fd00:10:96::3cd7   <none>        80/TCP    7s
  nginx-deployment   ClusterIP   10.107.244.181     <none>        80/TCP    27m
  ```
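To verify that the IPv6 service actually answers, you can hit its ClusterIP from inside the cluster; the address below is the example from the output above, so substitute the one your cluster assigned (busybox's wget accepts bracketed IPv6 literals):

```shell
# Fetch the nginx welcome page over IPv6 from a throwaway pod
# (fd00:10:96::3cd7 is the example ClusterIP shown above)
kubectl run -i --rm --tty v6test --image=busybox --restart=Never -- \
  wget -qO- "http://[fd00:10:96::3cd7]:80"
```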