
What components does an OpenShift installation consist of and what are the common scenarios for distributing these components in a highly available environment?
OpenShift has three main components: one or more masters, one or more etcd instances and one or more nodes. The "one or more" becomes "several" in a production environment, where a highly available setup is required. The master is the "brain" of the cluster and handles all management tasks. etcd is a distributed key-value store that persists the cluster state and configuration (current nodes, secrets, image streams, etc.). The nodes do the actual work; the master decides what needs to be done and which node runs which container.
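To make that division of labour concrete, here is a toy Python sketch (my own illustration, not OpenShift code): a dictionary stands in for the state that etcd would persist, a scheduling function plays the role of the master, and the nodes simply receive the containers assigned to them. All names in it are made up.

from collections import defaultdict

# Stand-in for the cluster state that etcd would persist.
cluster_state = {
    "nodes": ["node1", "node2", "node3"],
    "assignments": defaultdict(list),  # node -> containers assigned to it
}

def master_schedule(container: str) -> str:
    """Play the master: pick the least-loaded node and record the decision."""
    assignments = cluster_state["assignments"]
    target = min(cluster_state["nodes"], key=lambda n: len(assignments[n]))
    assignments[target].append(container)  # the decision lands in the etcd stand-in
    return target

for c in ["web-1", "web-2", "db-1", "web-3"]:
    print(c, "->", master_schedule(c))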

All-in-one

In this installation there is only one instance of each component, and all of them run on a single physical host. This setup is only suitable for test and demo purposes, not for a production environment. In previous posts we looked at two all-in-one installations (oc tool and minishift).

Single master, single etcd, multiple nodes

An all-in-one solution is intended for testing, demo and development purposes; with only a single node doing the actual work, you will not get very far. OpenShift can manage up to 2,000 nodes per cluster (read more about the physical cluster limits here). With several nodes, this scenario can already handle sizeable workloads. It is clear, however, that if the master or the etcd instance fails, the cluster as a whole is no longer operational: the individual containers keep running on the nodes, but it is no longer possible to deploy new containers or to move containers away from an overloaded node.

Single master, multiple etcd, multiple nodes

The real weak point in this OpenShift installation is etcd. If the single instance is gone, the entire state of the cluster is lost with it. It is not as simple as installing a new etcd instance, because each instance is assigned an ID (again a value composed of host-related data). A new host means a new ID, and with it an unrecoverable cluster state.

Since etcd is this important, several etcd instances should run on different hosts in an HA (high availability) environment. Best practice is to run an odd number of etcd instances. This has to do with the Raft protocol, which etcd uses to ensure consistency between the instances. With 3 instances, exactly one can fail and the cluster remains fully functional. In the process known as quorum, the cluster determines whether a majority of its members is still reachable and, based on that, decides whether it can stay alive. The table below shows, for each cluster size, how many instances are needed for a majority and how many of them can fail; a small sketch after the table reproduces these numbers.

CLUSTER SIZE    MAJORITY    FAILURE TOLERANCE
1               1           0
2               2           0
3               2           1
4               3           1
5               3           2
6               4           2
7               4           3
8               5           3
9               5           4

Source: https://coreos.com/etcd/docs/latest/faq.html
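The arithmetic behind the table is simple: a cluster of n members needs floor(n/2) + 1 members for a majority, and it can tolerate the loss of the remaining n minus that majority. A minimal Python sketch (my own illustration, not etcd code) that reproduces the table:

def majority(n: int) -> int:
    """Smallest number of members that constitutes a majority (quorum)."""
    return n // 2 + 1

def failure_tolerance(n: int) -> int:
    """Number of members that can fail while the cluster keeps its quorum."""
    return n - majority(n)

print("CLUSTER SIZE  MAJORITY  FAILURE TOLERANCE")
for size in range(1, 10):
    print(f"{size:>12}  {majority(size):>8}  {failure_tolerance(size):>17}")

It also shows why an even number of members buys nothing: going from 3 to 4 instances raises the required majority from 2 to 3, while the failure tolerance stays at 1.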

Multiple masters, multiple etcd, multiple nodes

Above a certain workload (number of containers, number of management API requests), a single master starts to sweat. Apart from the high load, the failure of a single master also means that no new deployments, builds or anything else can be carried out in the cluster. The containers stay alive, but the cluster can no longer be operated. To prevent this, you can and should install the masters in a highly available design as well. It is perfectly valid to install the master instances on the same physical hosts as the etcd instances, but I have also seen installations where even these two components were kept physically separate. To distribute the load between the masters, you need an external load balancer such as nginx, haproxy, Netscaler or whatever already exists in the company.

The number of masters and the number of nodes are independent of each other; only etcd requires a quorum in an HA setup. The masters can therefore be deployed in any number (an odd count does not matter here).
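To make the role of the external load balancer a bit more concrete, here is a minimal Python sketch (my own illustration, not an nginx or haproxy configuration) of round-robin distribution of API requests across several master endpoints, skipping masters that are currently unreachable. The endpoint names and the port are assumptions:

import itertools

# Hypothetical master API endpoints sitting behind the load balancer.
MASTERS = ["https://master1:8443", "https://master2:8443", "https://master3:8443"]

def pick_master(round_robin, is_healthy):
    """Return the next reachable master, or None if all of them are down."""
    for _ in range(len(MASTERS)):
        candidate = next(round_robin)
        if is_healthy(candidate):
            return candidate
    return None

round_robin = itertools.cycle(MASTERS)
# Pretend master2 has failed; the remaining masters keep serving API requests.
is_healthy = lambda url: "master2" not in url
for _ in range(4):
    print("forward request to", pick_master(round_robin, is_healthy))

As long as at least one master answers, the API stays reachable; which master handles a given request does not matter, since they all read from and write to the same etcd cluster.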


In one of the next posts I will specifically discuss a production-ready setup.