The process of getting the hosts ready will clearly differ depending on whether you’re on actual bare metal servers or on hosted ones (e.g. VPS, EC2 instances, etc.).
Docker is known to have several issues with different Linux kernel versions. One thing we learnt (the hard way) is that you can’t just pick any distro and kernel version you like (and expect a production-grade system).
We would recommend you stick with one of the OS versions that Kubespray supports.
Namely: Ubuntu, CoreOS or CentOS. We chose Ubuntu due to previous familiarity with the distro.
Clearly, the first step after you wire up the servers on your rack is to install the operating system. In our case this was Ubuntu 16.04, so I wrote down the steps in case you have any doubts.
With AWS as our cloud provider, we still want to stay as vanilla as possible to avoid vendor lock-in. As such, EC2 (ok, and VPC) is all we’ll be using out of Amazon’s services, so that we can easily move somewhere else later.
Now, you could just go to the AWS web console, start a bunch of Ubuntu 16.04 instances, tag/name them appropriately and hand them to Ansible (the next step), but… why would you? We have Terraform for that. In fact, the Kubespray repo even has a Terraform example for AWS ready for you!
BUT… it uses the AWS ELB and NAT gateway. Deal breakers for us here, so what are we to do but roll our own?
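To make the Terraform route concrete, here is a minimal, hypothetical sketch of provisioning a few Ubuntu 16.04 hosts and tagging them so an Ansible inventory can pick them up. This is not the Kubespray example: the region, instance type, node count, subnet and key-pair variables are illustrative assumptions you would adapt to your own setup.

```hcl
# Sketch only: provision a handful of Ubuntu 16.04 nodes for Kubespray to configure.

variable "subnet_id" {
  description = "Existing subnet to launch the nodes into (assumption)"
}

variable "ssh_key_name" {
  description = "Name of an existing EC2 key pair (assumption)"
}

provider "aws" {
  region = "eu-west-1" # assumption: use your own region
}

# Latest official Ubuntu 16.04 (Xenial) AMI published by Canonical.
data "aws_ami" "ubuntu_xenial" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }
}

# A few nodes, tagged so an Ansible inventory (static or dynamic) can find them.
resource "aws_instance" "k8s_node" {
  count         = 3                        # assumption: pick your own node count
  ami           = data.aws_ami.ubuntu_xenial.id
  instance_type = "t2.medium"              # assumption: size to your workload
  subnet_id     = var.subnet_id
  key_name      = var.ssh_key_name

  tags = {
    Name = "k8s-node-${count.index}"
    Role = "kube-node"
  }
}
```

From there, a plain `terraform apply` gives you hosts that the Ansible step which follows can target.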
If you are lucky enough to be running on AWS, you have Amazon’s Elastic Load Balancer to bring internet traffic into your k8s cluster. If not (or if you don’t want to be vendor-locked into it), you need to roll your own.
+ We provide an HA solution based on the Kubernetes Service LoadBalancer (a minimal manifest sketch follows below).
The Kubespray AWS Terraform example deploys the entire Kubernetes cluster on a private subnet, meaning the EC2 instances don’t have public IPs. Amazon’s NAT gateway offers a way to NAT your traffic through a set of Elastic IPs, allowing, amongst other things, your peers to whitelist just those IPs.
+ For bare-metal scenarios, we will provide a different solution to satisfy this constraint, as we can’t afford to have just any random set of IPs as the source of our traffic.
Both of these will be discussed in the next section.
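For context on the Service LoadBalancer mentioned above, here is what such a Service looks like as a manifest. The name, label and ports are purely illustrative assumptions; on AWS the cloud-provider integration turns this into an ELB, while on bare metal something else has to fulfil that role, which is exactly what the next section addresses.

```yaml
# Illustrative only: the service name, label and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: my-frontend           # hypothetical service name
spec:
  type: LoadBalancer          # ask the cluster for an external load balancer
  selector:
    app: my-frontend          # hypothetical label on the backing pods
  ports:
    - port: 80                # port exposed by the load balancer
      targetPort: 8080        # port the pods actually listen on
```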
We also have our own example that allows you to set up your K8s cluster even within an existing VPC, and this does NOT use an AWS ELB or NAT gateway. Please take a look at the terraform directory.
Now that you have some hosts ready, you can proceed with the rest of the requirements before installing your Kubernetes cluster.