
Announcing Kubernetes High-Availability for Maestro

Vic van Gool
Apr 17th 2019 · Updated Jun 17th 2021


Introduction

Today we're excited to announce brand-new support for High Availability (HA) Kubernetes! Here at Cloud 66, we've been running Kubernetes both for ourselves and for our customers (check out Maestro) for quite some time now. Kubernetes is a really great piece of software! You tell it what you want, and it tries its best to make that happen. Sure, the process of telling it what you want can be daunting (and has a steep learning curve) - but the payoff is great! One thing that's been missing for our customers for a while, though, is High Availability (HA) on the Master Nodes of their clusters - happily, this is now available on clusters running Kubes v1.13.x and higher.

If you're just getting started with Kubernetes, fear not, you can read our handy help page on Kubernetes cluster nodes.

One thing that has matured in the last few months is the ability to configure your cluster with High Availability (HA). This essentially means being able to create multiple Master Nodes, so that your cluster can tolerate a single Master Node going down (this doesn't apply to Worker Nodes, as you can have many of those already!). We've been watching this space for a long time and feel that it is finally ready for prime time - consequently, HA is now available on your clusters (running Kubes v1.13.x and higher)!

The implementation isn't completely straightforward and relies on some tricky logic. You can read more about that below, but the good news is that we've done all of this for you already; scaling up additional Master Nodes will just work!

Implementation

Under the hood, we primarily use kubeadm to provision Kubernetes clusters. This makes sense for us, as we already control the server provisioning (regardless of your Cloud provider). Some really great resources for cluster provisioning are here in the Kubernetes docs:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
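
To make this a little more concrete, here's a minimal sketch of bootstrapping a first Master with kubeadm against a stable control-plane endpoint. The version, endpoint name and file path are illustrative assumptions, not the exact values Maestro generates.

    # Minimal ClusterConfiguration sketch (hypothetical endpoint and version),
    # not the exact configuration Maestro writes for you.
    cat <<'EOF' > kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.13.5
    controlPlaneEndpoint: "kube-master.example.internal:6443"
    EOF

    # Initialise the first Master using that config
    sudo kubeadm init --config kubeadm-config.yaml

Pointing controlPlaneEndpoint at a stable name (rather than a single Master's IP) is what leaves room for additional Masters to be added later.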

With regard to HA, we've followed the Kubeadm HA guide. However, there are a couple of places where we've diverged - I'll go through those next.

Masters Communication (without a Load Balancer)

When you have a single Master Node (Master) in your cluster, everyone knows who to talk to when they need cluster information. Worker Nodes (Worker) talk to the Master on its available address, and Administrators perform cluster configuration via that same address. However, if you have multiple Masters (to provide fault-tolerance within your cluster) then you have a collection of endpoints that Workers (from within) and Administrators (from without) can possibly talk to.

The canonical solution for this is to create a load balancer infrastructure component (whether Cloud native or HAProxy based as per the docs) that accepts requests and forwards those to one of the Masters. Internally it performs health checks to ensure that it is forwarding requests to healthy Masters.

This posed a challenge for us, in that we did not want to make an additional infrastructure component a prerequisite for HA. As such, we came up with the solution of running a local HAProxy instance on each Node in the cluster. HAProxy is very lightweight, and running it as a reverse proxy in this way imposes a negligible overhead on server resources.

So each Node's cluster address is essentially a local loopback to this HAProxy instance, which in turn forwards requests to an available Master. For Masters themselves, we amend the HAProxy instance to point only to itself (after provisioning of that specific Master is complete). This maintains a single cluster endpoint and also avoids local Master requests bouncing to other Masters.
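
As a rough illustration of the idea (the addresses, port and server names below are made up, and this isn't the exact configuration Maestro manages for you), the per-Node HAProxy config might look something like this:

    # Sketch of a per-Node HAProxy fragment: the Node talks to a local
    # loopback address, and HAProxy forwards to a healthy Master.
    cat <<'EOF' >> /etc/haproxy/haproxy.cfg
    frontend kube-apiserver
        bind 127.0.0.1:16443
        mode tcp
        default_backend kube-masters

    backend kube-masters
        mode tcp
        balance roundrobin
        option tcp-check
        server master-1 10.0.0.11:6443 check
        server master-2 10.0.0.12:6443 check
        server master-3 10.0.0.13:6443 check
    EOF

On a Master, the backend section would instead list only that Master's own API server, which is what keeps local requests from bouncing to the other Masters.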

Masters Communication (without Regenerating Certificates)

One of the problems we've faced in the past with our implementation is that, as the Master Node address is essentially an IP address, the TLS certificate that Kubernetes uses to secure its communications is signed up-front for this IP address - which causes problems if the IP address changes.

The process of regenerating the Kubernetes certificates is still a bit of dark magic. Generate certs here, copy certs there, restart these services, kill those pods... etc. I'd love for Kubernetes to have an embedded mechanism to make this easier; however, almost by design any such solution fails, because as soon as the certificate is no longer valid (i.e. the IP address has changed), Kubernetes can NO LONGER communicate!
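
If you want to see why an IP change is fatal, you can inspect the names and addresses that the API server certificate was signed for (the path below is kubeadm's default certificate location):

    # List the Subject Alternative Names baked into the API server certificate
    sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
      | grep -A1 "Subject Alternative Name"

If the Master's current address isn't in that list, clients refuse the TLS handshake and the cluster can no longer talk to itself.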

We wanted a solution that would keep Kubernetes certificate regeneration to a minimum (only being applied when certificates expire, for example). To do this we generate the certificate based on a set of known CNAMEs, which in turn are locally controlled via /etc/hosts on each Node. If an IP address changes, we simply update the hosts files on each Node and voilà - Kubernetes is back up and running everywhere with communications restored.
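
For illustration (the hostnames and addresses are hypothetical - Maestro generates its own names), the mechanism looks roughly like this on each Node:

    # Stable names for the Masters, controlled locally on every Node.
    # The Kubernetes certificates are signed for these names, not the raw IPs.
    cat <<'EOF' >> /etc/hosts
    10.0.0.11  kube-master-1.cluster.internal
    10.0.0.12  kube-master-2.cluster.internal
    10.0.0.13  kube-master-3.cluster.internal
    EOF

    # If a Master's IP address changes, only the hosts entry needs updating:
    sudo sed -i 's/10.0.0.11/10.0.0.21/' /etc/hosts

The stable names also need to be included in the certificate's SANs when the cluster is first provisioned (kubeadm's apiServer certSANs setting covers this), so the certificates remain valid whichever IP addresses the names currently resolve to.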

Conclusion

Setting up HA on your Kubernetes cluster is certainly the next logical step in bringing Kubernetes to the prime-time. That said, the process of configuring and maintaining the cluster still presents some challenges! I'm hopeful that advancements in the Kubernetes project will help to reduce some of those challenges in the future.

So if you want a High Availability Kubernetes cluster that just works, come and check out Cloud 66 Maestro today!
