---
SPDX-License-Identifier: MIT
path: "/tutorials/kubernetes-on-hetzner-with-crio-flannel-and-hetzner-balancer"
slug: "kubernetes-on-hetzner-with-crio-flannel-and-hetzner-balancer"
date: "2025-01-27"
title: "Installing a Kubernetes cluster on Hetzner Cloud using Flannel, Ingress Nginx controller and the Hetzner Cloud Load Balancer"
short_description: "This guide contains instructions for installing a Kubernetes cluster using kubeadm and Helm"
tags: ["Kubernetes", "kubeadm", "Helm", "flannel", "CRI-O", "Nginx Ingress Controller", "Hetzner Balancer"]
author: "Dmitry Tiunov"
author_link: "https://github.com/DTiunov"
author_img: "https://avatars3.githubusercontent.com/u/....."
author_description: "DevOps engineer for PeakVisor and Zentist projects"
language: "en"
available_languages: ["en", "ru"]
header_img: "header-x"
cta: "product"
---

## Introduction

This tutorial contains instructions for installing a Kubernetes 1.30 cluster using [kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm) and [Helm](https://helm.sh). All cluster nodes will be located within a single Hetzner Network, and requests from the Internet will reach the cluster through a Hetzner Load Balancer. This configuration allows you to shield the cluster nodes from external connections with a firewall, which makes your cluster more secure.
By following this tutorial, you will install [Flannel](https://github.com/flannel-io/flannel) as the CNI (Container Network Interface) plugin and [CRI-O](https://cri-o.io) as the CRI (Container Runtime Interface) implementation.

CRI-O was chosen due to the following advantages:
* CRI-O is more secure than containerd due to its limited set of features and internal components, which reduces the attack surface
* CRI-O has a clear, publicly documented component structure
* CRI-O has an active community

By installing the components step by step, you will assemble the cluster like a construction kit and learn how to manage the Hetzner Load Balancer using the annotations specified in the values file of the Ingress Controller.

**Prerequisites**

* 3 virtual machines in the Hetzner Cloud (1 for Control Plane, 2 for Workers)
* With the Ubuntu 24.04 operating system
* Connected to the same Hetzner Network
* With root access
* [Helm](https://helm.sh/docs/intro/install) installed on the Control Plane virtual machine
* [Hetzner Cloud API token](https://docs.hetzner.com/cloud/api/getting-started/generating-api-token) in the [Hetzner Cloud Console](https://console.hetzner.cloud)
* TLS/SSL certificate imported into the [Hetzner Cloud Console](https://console.hetzner.cloud):
```
https://console.hetzner.cloud/projects/<project_id>/security/certificates
```

## Step 1 - Configure repositories

In this step we will add the Kubernetes and CRI-O repositories on all cluster nodes.

Setting the Kubernetes version as an environment variable

```bash
KUBERNETES_VERSION=v1.30
```

Adding Kubernetes key and repository
```bash
curl -fsSL https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
```

Adding CRI-O key and repository
```bash
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/prerelease:/main/deb/ /" | tee /etc/apt/sources.list.d/cri-o.list
```

## Step 2 - Install packages

The next step is to install the CNI packages and Kubernetes components on all nodes in the cluster.

Updating list of available packages
```bash
apt update
```

Installing packages
```bash
apt install -y cri-o kubelet kubeadm kubectl
crio --version && kubelet --version && kubeadm version && kubectl version --client
```

Disabling automatic updates for installed packages
```bash
apt-mark hold kubelet kubeadm kubectl
```

## Step 3 - Operating system configuration

Kubernetes requires some preliminary operating system settings. If you are using the Ubuntu 24.04 image provided by Hetzner, you do not need to disable swap; just run the commands below on all nodes in the cluster.

Enabling [br_netfilter](https://ebtables.netfilter.org/documentation/bridge-nf.html) kernel module
```bash
echo "br_netfilter" >> /etc/modules-load.d/modules.conf
```

Enabling IP forwarding. It allows packets to be routed between different networks, enabling the node to pass traffic between subnets or act as a gateway
```bash
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
```

Rebooting node to apply changes
```bash
reboot
```

## Step 4 - Initialize the cluster

Cluster initialization is performed on the future Control Plane node. This tutorial assumes a single Control Plane node; to ensure fault tolerance in a production environment, you should use at least three.

Getting the default init configuration file
```bash
kubeadm config print init-defaults > InitConfiguration.yaml
```

Change or add the following values inside the file (see the example after this list):

* `bootstrapTokens.token: abcdef.0123456789abcdef` - you can generate a bootstrap token with `kubeadm token generate`
* `localAPIEndpoint.advertiseAddress: 10.0.0.1` - the IP address the local API server advertises it is accessible on. By default, kubeadm tries to auto-detect the IP of the default interface; set the IP address of your Control Plane node in the Hetzner Network here
* `nodeRegistration.criSocket: unix:///var/run/crio/crio.sock` - path to CRI-O socket file
* `nodeRegistration.kubeletExtraArgs.cloud-provider: external` - set to `external` for running with an external cloud provider
* `nodeRegistration.kubeletExtraArgs.node-ip: 10.0.0.1` - IP address (or comma-separated dual-stack IP addresses) of the node. Set the IP address of your Control Plane node in the Hetzner Network here
* `nodeRegistration.name: your_host` - the name of the Control Plane node
* `apiServer.certSANs: ['your_host.example.com', '10.0.0.1']` - Subject Alternative Names for the API server certificate. These can include both the FQDN and the local IP address of your Control Plane node
* `networking.podSubnet: 10.244.0.0/16` - the subnet used by Pods
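
For reference, a minimal sketch of the edited `InitConfiguration.yaml` might look like the following. The token, host name, and IP addresses are the placeholder values from the list above and must be replaced with your own:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
  - token: "abcdef.0123456789abcdef"   # generate with `kubeadm token generate`
localAPIEndpoint:
  advertiseAddress: 10.0.0.1           # Control Plane IP in the Hetzner Network
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  kubeletExtraArgs:
    cloud-provider: external
    node-ip: 10.0.0.1                  # Control Plane IP in the Hetzner Network
  name: your_host
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0             # keep the version printed by kubeadm
apiServer:
  certSANs:
    - your_host.example.com
    - 10.0.0.1
networking:
  podSubnet: 10.244.0.0/16
```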

Initializing the cluster
```bash
kubeadm init --config InitConfiguration.yaml
```

## Step 5 - Join nodes into the cluster

Now we can join nodes into the cluster. Run the commands below on each worker node.

Getting the default join configuration file
```bash
kubeadm config print join-defaults > JoinConfiguration.yaml
```

Change or add the following values inside the file (see the example after this list):

* `discovery.bootstrapToken.apiServerEndpoint: your_host.example.com:6443` - set your Control Plane node FQDN as the API endpoint
* `discovery.bootstrapToken.token: abcdef.0123456789abcdef` - the same as in the [InitConfiguration.yaml](#step-4---initialize-the-cluster) file
* `discovery.tlsBootstrapToken: abcdef.0123456789abcdef` - the same as in the [InitConfiguration.yaml](#step-4---initialize-the-cluster) file
* `nodeRegistration.criSocket: unix:///var/run/crio/crio.sock` - path to CRI-O socket file
* `nodeRegistration.kubeletExtraArgs.cloud-provider: external` - Set to `external` for running with an external cloud provider
* `nodeRegistration.kubeletExtraArgs.node-ip: 10.0.0.2` - IP address (or comma-separated dual-stack IP addresses) of the node. Set the IP address of your Worker node in the Hetzner Network
* `nodeRegistration.name: your_host` - the name of the Worker node
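
A minimal sketch of the edited `JoinConfiguration.yaml` might look like this (placeholder values again; in production, prefer pinning the CA with `caCertHashes` from the `kubeadm init` output instead of skipping verification):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: your_host.example.com:6443
    token: abcdef.0123456789abcdef
    # Prefer caCertHashes from the `kubeadm init` output;
    # unsafeSkipCAVerification is shown here only for brevity.
    unsafeSkipCAVerification: true
  tlsBootstrapToken: abcdef.0123456789abcdef
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  kubeletExtraArgs:
    cloud-provider: external
    node-ip: 10.0.0.2                  # Worker IP in the Hetzner Network
  name: your_host
```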

Joining the node to the cluster
```bash
kubeadm join --config JoinConfiguration.yaml
```

Assign the worker role to worker nodes by adding the corresponding label. Run these commands on the Control Plane node.

> In the commands below, replace `k8s-test-worker-1` and `k8s-test-worker-2` with the names of your worker nodes.

```bash
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes
kubectl label node k8s-test-worker-1 node-role.kubernetes.io/worker=worker
kubectl label node k8s-test-worker-2 node-role.kubernetes.io/worker=worker
```

## Step 6 - Install CNI plugin

This guide uses [Flannel](https://github.com/flannel-io/flannel) as the CNI. If you want to use network policies to provide a higher level of security, you should consider other plugins.

Creating a namespace for Flannel
```bash
kubectl create ns kube-flannel
```

Then grant privileged permissions to the namespace. The `privileged` Pod Security Standard has no restrictions: all Pods in the kube-flannel namespace will be able to bypass typical container isolation mechanisms, for example, to access the node's network
```bash
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged
```

Adding helm repository
```bash
helm repo add flannel https://flannel-io.github.io/flannel/
```

You can get the `values.yaml` file from the [kube-flannel repository](https://github.com/flannel-io/flannel/blob/master/chart/kube-flannel/values.yaml) and set the value of `podCidr` to the same value as specified for `networking.podSubnet` in the [InitConfiguration.yaml](#step-4---initialize-the-cluster) file.
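
With the default Pod subnet from this tutorial, the relevant part of the chart's `values.yaml` would look like this:

```yaml
# Must match networking.podSubnet from InitConfiguration.yaml
podCidr: "10.244.0.0/16"
```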

Installing Flannel into the previously created namespace
```bash
helm install flannel flannel/flannel --values values.yaml -n kube-flannel
```

Components that set `cloud-provider` to `external` will add the taint `node.cloudprovider.kubernetes.io/uninitialized` with the effect `NoSchedule` during initialization. This marks the node as needing a second initialization from an external controller before work can be scheduled on it. Note that if a cloud controller manager is not available, new nodes in the cluster will be left unschedulable. That is why we have to patch CoreDNS after the Flannel installation, to avoid problems with CoreDNS Pod initialization
```bash
kubectl -n kube-system patch deployment coredns --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
```
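
After the patch, the tolerations of the CoreDNS Pod template should look roughly like this (the first two tolerations are the kubeadm defaults; only the last one is added by the command above):

```yaml
tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
  - key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
    effect: NoSchedule
```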

## Step 7 - Install Hetzner Cloud Controller

We need to install the Hetzner Cloud Controller Manager to integrate our Kubernetes cluster with the Hetzner Cloud API. This gives us access to the features provided by Hetzner; we are mainly interested in the managed Load Balancer.

Creating a Kubernetes Secret resource that contains your Hetzner Cloud API token (see the Prerequisites in the [Introduction](#introduction)) and your Hetzner Network ID. The Network ID can be extracted from the last digits of your network's URL in the Hetzner Cloud Console. For example, for the URL `https://console.hetzner.cloud/projects/98071/networks/2024666/resources` it is `2024666`
```bash
kubectl -n kube-system create secret generic hcloud --from-literal=token=<HETZNER_API_TOKEN> --from-literal=network=<HETZNER_NETWORK_ID>
```

Adding and updating helm repository
```bash
helm repo add hcloud https://charts.hetzner.cloud
helm repo update hcloud
```

Installing the Hetzner Cloud Controller Manager
```bash
helm install hccm hcloud/hcloud-cloud-controller-manager -n kube-system
```

## Step 8 - Install Ingress Controller

In this tutorial, we will use the [Nginx Ingress Controller](https://github.com/kubernetes/ingress-nginx). TLS termination will be done on the Hetzner Load Balancer, which will be automatically created and configured during the installation of the Ingress Controller.

Adding and updating helm repository
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```

The `values.yaml` file can be downloaded from the official [Ingress Nginx repository](https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml). Change the following values inside the file (see the combined example after this list):

* `controller.dnsPolicy: ClusterFirstWithHostNet` - by default, Pods that use `hostNetwork` resolve names through the host's DNS. If you want the Ingress Controller to keep resolving names inside the Kubernetes network, use `ClusterFirstWithHostNet`
* `controller.hostNetwork: true` - run the Ingress Controller in the host network
* `controller.kind: DaemonSet` - install Ingress Controller as a DaemonSet
* `controller.service.annotations` - these annotations define the settings for the Hetzner Load Balancer, which will be created by the Hetzner Cloud Controller automatically after the Ingress Controller is installed. The list and description of available annotations can be found in the official [Hetzner Cloud Controller repository](https://github.com/hetznercloud/hcloud-cloud-controller-manager/blob/main/internal/annotation/load_balancer.go)
* `load-balancer.hetzner.cloud/name: "k8s-test-lb"` - this is the name of the Load Balancer. The name will be visible in the Hetzner Cloud Console
* `load-balancer.hetzner.cloud/location: "fsn1"` - specifies the location where the Load Balancer will be created
* `load-balancer.hetzner.cloud/type: "lb11"` - specifies the type of the Load Balancer
* `load-balancer.hetzner.cloud/ipv6-disabled: "true"` - disables the use of IPv6 for the Load Balancer
* `load-balancer.hetzner.cloud/network-zone: "eu-central"` - specifies the network zone (for example, `eu-central`) where the Load Balancer will be created. It must match the network zone of the Hetzner Network the Kubernetes nodes are attached to
* `load-balancer.hetzner.cloud/use-private-ip: "true"` - configures the Load Balancer to use the private IP for Load Balancer server targets. This is necessary so that traffic from the Hetzner Balancer to the cluster nodes goes inside the Hetzner Network
* `load-balancer.hetzner.cloud/protocol: "https"` - specifies the protocol of the service
* `load-balancer.hetzner.cloud/http-certificates: "certificatename"` - a comma-separated list of IDs or names of certificates (see the Prerequisites in the [Introduction](#introduction))
* `load-balancer.hetzner.cloud/http-redirect-http: "true"` - create a redirect from HTTP to HTTPS
* `controller.service.enableHttp: false` - disables the HTTP listener on the controller Service; the HTTP-to-HTTPS redirect is handled by the Hetzner Load Balancer
* `controller.service.targetPorts.https: http` - the port of the Ingress Controller that the external HTTPS listener is mapped to. Since the Hetzner Load Balancer redirects HTTP to HTTPS and is responsible for TLS termination, there is no need for encryption between the Load Balancer and the cluster nodes. Accordingly, the target port should be `http` (80)
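
Putting it all together, the changed section of `values.yaml` might look like the following sketch. The Load Balancer name, location, network zone, and certificate name are the example values from this tutorial and must be replaced with your own:

```yaml
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  service:
    enableHttp: false
    targetPorts:
      https: http                       # plain HTTP between the LB and the nodes
    annotations:
      load-balancer.hetzner.cloud/name: "k8s-test-lb"
      load-balancer.hetzner.cloud/location: "fsn1"
      load-balancer.hetzner.cloud/type: "lb11"
      load-balancer.hetzner.cloud/ipv6-disabled: "true"
      load-balancer.hetzner.cloud/network-zone: "eu-central"
      load-balancer.hetzner.cloud/use-private-ip: "true"
      load-balancer.hetzner.cloud/protocol: "https"
      load-balancer.hetzner.cloud/http-certificates: "certificatename"
      load-balancer.hetzner.cloud/http-redirect-http: "true"
```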


Installing Ingress Controller
```bash
helm install ingress-nginx-controller ingress-nginx/ingress-nginx -f values.yaml -n kube-system
```

## Step 9 - Install CSI driver (Optional)

At the time of writing, Hetzner Cloud volumes only support `ReadWriteOnce` access via the Container Storage Interface, but they are still useful, so I recommend installing the CSI driver.

Adding and updating helm repository
```bash
helm repo add hcloud https://charts.hetzner.cloud
helm repo update hcloud
```

Installing Hetzner Cloud CSI driver
```bash
helm install hcloud-csi hcloud/hcloud-csi -n kube-system
```

## Conclusion

As a result, you have a cluster consisting of 1 Control Plane node and 2 Worker nodes, as well as a Load Balancer serving traffic over HTTPS.

Now you can block all Internet traffic to the cluster nodes with firewall rules and create the remaining resources necessary to run your own application on the cluster, starting with an Ingress like the sketch below.
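
As a starting point, a minimal Ingress for a hypothetical application might look like this (`my-app` and the host name are placeholders; the referenced Service must exist in your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # hypothetical application name
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: your_host.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app    # hypothetical Service exposing your application
                port:
                  number: 80
```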

##### License: MIT

<!--

Contributor's Certificate of Origin

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I have
the right to submit it under the license indicated in the file; or

(b) The contribution is based upon previous work that, to the best of my
knowledge, is covered under an appropriate license and I have the
right under that license to submit that work with modifications,
whether created in whole or in part by me, under the same license
(unless I am permitted to submit under a different license), as
indicated in the file; or

(c) The contribution was provided directly to me by some other person
who certified (a), (b) or (c) and I have not modified it.

(d) I understand and agree that this project and the contribution are
public and that a record of the contribution (including all personal
information I submit with it, including my sign-off) is maintained
indefinitely and may be redistributed consistent with this project
or the license(s) involved.

Signed-off-by: [Dmitry Tiunov [email protected]]

-->