What’s in OpenShift 4?
Check out some of the latest features that you will find in OpenShift 4.
OpenShift, arguably the most popular Kubernetes distribution for hybrid cloud, recently got its fourth major release. The release is the result of Red Hat’s (now part of IBM) acquisition of CoreOS and merges two leading Kubernetes distributions, Tectonic and OpenShift. Both platforms brought their own advantages, large open-source communities, and a solid standing in the cloud-native space:
- CoreOS Tectonic: the Operator Framework, the Quay.io container build and registry service, and a stable, minimal Linux distribution with Ignition bootstrapping and a transaction-based update engine.
- OpenShift: wide enterprise adoption, security, and multi-tenancy features.
What Do We Get as an Outcome of Such a Merge?
The short answer is, OpenShift 4 is built on top of Kubernetes 1.13 and comes with three main features:
- Self-Managing Platform
- Application Lifecycle Management
- Automated Infrastructure Management
However, the devil is in the details, so let’s have a closer look!
The new openshift-install tool, together with operators, replaces the old Ansible scripts and is the first significant difference you notice compared to OpenShift v3.
The install experience is straightforward; the whole process can be done with one command and requires minimal infrastructure knowledge, since the tool follows a “success first, tweak later” principle.
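To make that concrete, the installer is driven by a single declarative file, install-config.yaml. Here is a minimal sketch for an installer-provisioned AWS cluster; the domain, cluster name, region, and replica counts are illustrative assumptions, and the secrets are placeholders you obtain from Red Hat and your own SSH keypair:

```yaml
apiVersion: v1
baseDomain: example.com          # assumption: your public DNS zone
metadata:
  name: demo-cluster             # assumption: cluster name
platform:
  aws:
    region: us-east-1            # assumption: target region
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
pullSecret: '...'                # placeholder: pull secret from Red Hat
sshKey: '...'                    # placeholder: public key for node access
```

Running `openshift-install create cluster` reads this file and drives the whole rollout; everything not specified falls back to sane defaults, which is exactly the “success first, tweak later” idea.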
It took me 40 minutes from starting the installer to having a ready-to-use platform. The new installer implements the Cluster API and “static pods” concepts, which means the Kubernetes API itself is used for cluster lifecycle management, e.g., bootstrap, upgrades, and configuration management. The whole process may sound strange; however, this architecture lets you treat Kubernetes cluster rollouts like any other cloud-native application, using the same tooling, API, and expertise. Additionally, the logic used by the installer is reused as part of automated upgrades, which is vital for avoiding configuration drift in the future.
The installer has two modes: installer-provisioned and user-provisioned infrastructure. The first is recommended, since it enables end-to-end automated cluster management; the second assumes infrastructure maintenance by a third party.
Under the hood, at a high level, the install process looks like this:
- The bootstrap node starts and hosts resources needed by the control plane
- Control plane nodes start an etcd cluster
- The bootstrap node starts a temporary control plane which uses an etcd cluster and schedules the permanent control plane
- The bootstrap node hands over to the newly created control plane and shuts down
- The permanent control plane creates the remaining resources
After the installer finishes, future platform maintenance is handled by self-hosted operators; this is where self-management comes in.
In OpenShift 4, the operator concept goes to the next level and forms the core of the platform. A hierarchy of operators, with clusterversion at the top, is the single entry point for configuration changes and is responsible for reconciling the system to the desired state. For example, if you break a critical cluster resource directly, the system automatically recovers it. Also, the OpenShift control plane and the OS are tightly linked and managed holistically, which allows transaction-based maintenance flows like upgrades and automatic certificate rotation.
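As a sketch of what sitting “at the top” means, the clusterversion operator is driven by a single ClusterVersion resource; the channel and version values below are illustrative:

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version            # there is exactly one, always named "version"
spec:
  channel: stable-4.1      # illustrative update channel
  desiredUpdate:
    version: 4.1.2         # setting this triggers an automated upgrade
```

Editing spec.desiredUpdate (or running `oc adm upgrade`) is the supported way to roll the whole platform forward; the operator hierarchy then reconciles the control plane, the node OS, and the cluster operators toward that version.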
Similarly to cluster maintenance, the Operator Framework is used for applications. As a user, you get the Operator SDK, OLM (Operator Lifecycle Manager), and an embedded OperatorHub.
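For example, installing an operator from the embedded hub boils down to creating an OLM Subscription object; in this sketch the operator package, channel, and catalog source are illustrative assumptions:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd                         # illustrative: subscribe to the etcd operator
  namespace: openshift-operators
spec:
  name: etcd                         # package name in the catalog (assumption)
  channel: alpha                     # update channel published by the operator
  source: community-operators        # catalog source (assumption)
  sourceNamespace: openshift-marketplace
```

Once the Subscription exists, OLM resolves the package from the catalog, installs the operator, and keeps it updated on the chosen channel, mirroring how the platform maintains itself.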
At the Node Level
RHEL CoreOS is the result of merging CoreOS Container Linux and Red Hat Atomic Host functionality and is currently the only supported OS for hosting OpenShift 4.
Some notes about the OS:
- Node provisioning with Ignition, which came from CoreOS Container Linux
- Atomic host updates with rpm-ostree
- CRI-O as a container runtime
- SELinux enabled by default
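Node configuration on RHEL CoreOS is itself managed declaratively: the machine-config operator renders MachineConfig resources into Ignition fragments and rolls them out to node pools transactionally. A minimal sketch, where the file path and content are purely illustrative:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example
  labels:
    machineconfiguration.openshift.io/role: worker   # target the worker pool
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - path: /etc/example.conf                      # illustrative file to lay down
        filesystem: root
        mode: 0644
        contents:
          source: data:,hello%20from%20machineconfig # URL-encoded inline content
```

Applying this object drains and reboots worker nodes one by one into the new configuration, which is why SSH-ing into nodes to edit files by hand is discouraged on this OS.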
Kubernetes is often considered a cloud-agnostic layer, since it contains abstraction mechanisms covering many aspects of *aaS solutions. OpenShift 4 introduces a set of new machine-* resources (Machine, MachineSet, and friends) provided by the Machine API, an implementation of the upstream Cluster API.
This allows creating, scaling, and maintaining cloud VM instances using Kubernetes objects, and simplifies writing custom controllers for scaling and provisioning the cluster.
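As a sketch, scaling worker nodes becomes editing a MachineSet object; the name and labels below are illustrative, and a real providerSpec carries cloud-specific VM settings (AMI, instance type, subnets, and so on):

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-worker-us-east-1a      # illustrative name
  namespace: openshift-machine-api
spec:
  replicas: 3                       # scale workers by changing this number
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: demo-worker-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: demo-worker-us-east-1a
    spec:
      providerSpec:
        value: {}                   # cloud-specific VM settings go here
```

Since these are ordinary Kubernetes objects, `oc scale machineset demo-worker-us-east-1a -n openshift-machine-api --replicas=5` works exactly like scaling a Deployment, and custom controllers can reconcile node capacity the same way they reconcile pods.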
OpenShift 4 has impressed me with its mix of self-maintenance features for both on-prem and cloud IaaS-based rollouts; it is a mature solution and an excellent building block for hybrid cloud infrastructure. However, since most organizations nowadays run more than one cloud and more than one Kubernetes cluster simultaneously, it was disappointing to see the lack of multi-cluster features such as centralized identity, RBAC, monitoring, and federation. Red Hat is looking in that direction, too; for example, their new cluster manager gives you a holistic list of clusters across your organization, and I dare predict that we will see more of these capabilities in upcoming releases, either as part of OpenShift or as a separate product.
Published at DZone with permission of Oleksii Dzhulai, DZone MVB. See the original article here.