HA Kubernetes Setup With K3s and Wireguard - Part 2
Last time we built our own VPN with Wireguard. That was one of the requirements we identified for setting up a highly available Kubernetes cluster with encrypted traffic between all our nodes. All that’s left now is actually building the cluster, so let’s get started.
Installing the first master node
The first thing we’re going to do is write a configuration file for the first master node. Every node will have its own configuration, so you can create the configuration on the server itself right away. I recommend saving a copy of the config in your home directory; if you need to uninstall k3s at some point, the provided uninstall scripts also delete this config from /etc.
One last thing before we get started: please ensure that your DNS is set up correctly. Your domain name should point to all public IP addresses of your master nodes. I also recommend setting up a subdomain for your cluster, e.g. kubernetes.example.com (I’m going to use this as an example moving forward).
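A quick sanity check with dig (the IPs below are placeholders in the same style as the rest of this post) should return the public IP of every master:

dig +short kubernetes.example.com
<external-ip-node-1>
<external-ip-node-2>
<external-ip-node-3>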
Here we go, our first configuration file ~/config.yaml:
tls-san:
- kubernetes.example.com
- <external-ip-master-node-1>
flannel-backend: wireguard-native
flannel-iface: wg0
node-ip: 10.222.0.1
node-external-ip: <external-ip-master-node-1>
advertise-address: 10.222.0.1
cluster-init: true
Let’s go through this step by step.
- tls-san adds additional hostnames to the self-signed TLS certificate the server node issues on first boot. These entries are added as Subject Alternative Names. Since we’re going to access the Kubernetes API of the cluster via the subdomain we configured, it makes sense to include it here. Adding the external IP of the node is also required so that we can join the other nodes to the cluster via the external IP.
- flannel-backend instructs the server to use the native Wireguard kernel module for Flannel, the CNI that k3s uses by default. Here’s the README of the Flannel project if you want to find out more about it.
- flannel-iface tells k3s to use our Wireguard interface for the node.
- node-ip controls which internal IP the master node advertises to other members of the cluster.
- node-external-ip tells other members of the cluster which external IP our master node is reachable on.
- advertise-address is usually set by default if one of node-ip or node-external-ip is set; we explicitly set it to the same value as node-ip.
- cluster-init tells k3s that we’re planning on building an HA cluster. It initializes the embedded etcd database on the master node.
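Before running the installer it’s worth double-checking that the Wireguard interface from part 1 is actually up and carries the internal IP we reference in the config. A minimal check (the exact prefix length depends on how you addressed the VPN in part 1):

# The interface should exist and hold the node's internal VPN address
ip -brief addr show wg0
# Inspect the peers and latest handshakes of the VPN interface
wg show wg0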
We still need to move the configuration to the correct folder for the installation script to pick it up during the install.
mkdir -p /etc/rancher/k3s && cp ~/config.yaml /etc/rancher/k3s/config.yaml
And finally - we install the binary and start our first cluster node:
curl -sfL https://get.k3s.io | sh -s - server
I deliberately chose to install k3s on all machines with an install script provided by the k3s authors. You should definitely review the script before executing it, especially on hardware that is running other business-critical software!
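If you’d rather inspect the script before it runs, you can download it first and execute it afterwards; the positional argument stays the same:

curl -sfL https://get.k3s.io -o install-k3s.sh
less install-k3s.sh
sh install-k3s.sh server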
The command should complete successfully without any errors. Congratulations, the first node is alive!
If the command did not complete successfully then it’s a good idea to check the logs.
journalctl -xeu k3s or systemctl status k3s should give you enough details to debug the problem.
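k3s also bundles its own kubectl, so you can inspect the node directly on the server before any client is configured:

# Runs the bundled kubectl against the local cluster
sudo k3s kubectl get nodes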
Configuring kubectl on the client machine
Now would be a good time to configure kubectl on your client to access the API of your cluster. k3s creates a configuration you can use as a blueprint (it contains the client certificate that kubectl needs to connect) - we only need to change the server host address.
cp /etc/rancher/k3s/k3s.yaml ~/k3s.yaml && sed -i s/127.0.0.1/<external-ip-node-1>/g ~/k3s.yaml
Copy ~/k3s.yaml to your client machine as ~/.kube/config. Let’s test it:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
<master-node-1> Ready control-plane,etcd,master 1m v1.25.6+k3s1
Success!
If you already have a ~/.kube/config file on your client then it makes sense to merge the new k3s config into your existing one. This is one way of doing it:
export KUBECONFIG=~/.kube/config:~/k3s.yaml
kubectl config view --flatten > ~/config
Check if the content of ~/config looks correct and feel free to replace ~/.kube/config with the newly created file.
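If you’re unsure whether the merge worked, listing the contexts in the new file is a quick check; the k3s-generated context is typically called default, and your existing contexts should appear next to it:

kubectl config get-contexts --kubeconfig ~/config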
Installing the remaining master nodes
Joining the remaining master nodes to the cluster works basically the same way as setting up the first one - you write a config file and run the installation script. Log into your second master node and open up ~/config.yaml:
token: <master-node-1-token>
tls-san:
- kubernetes.example.com
- <external-ip-node-2>
server: https://<external-ip-node-1>:6443
flannel-backend: wireguard-native
flannel-iface: wg0
node-ip: 10.222.0.2
node-external-ip: <external-ip-node-2>
advertise-address: 10.222.0.2
Okay, so we have a couple of new options here! Let’s dive in and see what they are about:
- token is a value that k3s generated on the first master node. It’s a secret that k3s requires from all master and agent nodes that want to join the cluster. The value of the token is found in /var/lib/rancher/k3s/server/token - you’ll need to copy that value and paste it into the config here.
- server tells k3s where the master node we want to join lives.
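Reading the token on the first master is a one-liner (the file lives in a root-owned directory, hence the sudo):

sudo cat /var/lib/rancher/k3s/server/token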
Let’s copy the config to the correct directory and install k3s:
mkdir -p /etc/rancher/k3s && cp ~/config.yaml /etc/rancher/k3s/config.yaml
curl -sfL https://get.k3s.io | sh -s - server
After a short while the command will return successfully.
If not - journalctl is your friend.
If you go back to your client machine and run kubectl get nodes you’ll notice that the output has changed:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
<master-node-1> Ready control-plane,etcd,master 10m v1.25.6+k3s1
<master-node-2> Ready control-plane,etcd,master 1m v1.25.6+k3s1
Finally, you’ll want to log in to your third master and open ~/config.yaml:
token: <master-node-1-token>
tls-san:
- kubernetes.example.com
- <external-ip-node-3>
server: https://<external-ip-node-1>:6443
flannel-backend: wireguard-native
flannel-iface: wg0
node-ip: 10.222.0.3
node-external-ip: <external-ip-node-3>
advertise-address: 10.222.0.3
/var/lib/rancher/k3s/server/token will be the same across all nodes; you can, however, decide to join the third master through the second master node. Change server to target the second master node and give it a try!
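In that case the only line that changes is the server entry, which would point at the second master instead:

server: https://<external-ip-node-2>:6443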
Let’s install k3s and conclude our setup:
mkdir -p /etc/rancher/k3s && cp ~/config.yaml /etc/rancher/k3s/config.yaml
curl -sfL https://get.k3s.io | sh -s - server
We’re almost done - let’s run our last test to see if all nodes have been registered correctly:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
<master-node-1> Ready control-plane,etcd,master 15m v1.25.6+k3s1
<master-node-2> Ready control-plane,etcd,master 6m v1.25.6+k3s1
<master-node-3> Ready control-plane,etcd,master 1m v1.25.6+k3s1
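If you want to confirm that the internal and external addresses were picked up as intended, -o wide adds INTERNAL-IP and EXTERNAL-IP columns - the internal ones should be the Wireguard addresses from our configs:

kubectl get nodes -o wide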
Great work! Your cluster setup is now complete, and you’ve got yourself an HA k3s installation. I recommend changing the server address in your ~/.kube/config to use the domain that points to all three of your nodes, like so:
sed -i s/<external-ip-node-1>/kubernetes.example.com/g ~/.kube/config
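A simple way to confirm the change works (and that the tls-san entries cover the domain) is to query the API through it:

kubectl cluster-info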
Let’s check out the traefik service in our cluster to see if all external IPs have been registered correctly:
kubectl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
kube-dns ClusterIP 10.43.X.X <none> 53/UDP,53/TCP,9153/TCP
metrics-server ClusterIP 10.43.X.X <none> 443/TCP
traefik LoadBalancer 10.43.X.X <external-ip-node-1>,<external-ip-node-2>,<external-ip-node-3> 80:<internal-port>/TCP,443:<internal-port>/TCP
If you don’t see all external IPs coming up, don’t worry - the service discovers these IPs and adds them after it starts up. This might take a minute or two, depending on the speed and load of the machines.
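Once the IPs are there you can probe any of them directly. Since we haven’t deployed any Ingress routes yet, a 404 response from Traefik is the expected result here (assuming port 80 isn’t blocked by a firewall) and shows the load balancer answers on every node:

# Repeat with the other external IPs - each should respond
curl -i http://<external-ip-node-1>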
Summary
We’ve achieved our goal and set up a k3s cluster that is fault-tolerant. Internal communication between nodes is piped through our Wireguard VPN, which adds an extra layer of security. In the next post we’re going to set up a demo application and take care of TLS certificates using Let’s Encrypt. Until then - take care!