Mirantis Enables OpenStack on Kubernetes
Mirantis made some waves when it announced that it was enabling OpenStack on Kubernetes. See what the impacts will be and if the noise lives up to the hype.
Mirantis' announcement on enabling OpenStack on Kubernetes created a lot of buzz, with reactions ranging from calling it controversial to calling it capitulation. We had an opportunity to chat with Boris Renski, co-founder and CMO of Mirantis. Here is our take.
In a typical OpenStack deployment, OpenStack services (such as Compute, Storage, Networking, and Identity) are installed on bare-metal servers. OpenStack software also supports installing these services on virtual machines or containers, and Mirantis OpenStack already supports deploying OpenStack services in containers. OpenStack software also supports deploying applications in containers (on bare metal or virtual machines) and on popular container orchestration engines such as Kubernetes and Docker Swarm. In these setups, OpenStack services are managed and orchestrated internally by OpenStack, while applications running in containers and the container infrastructure are managed and orchestrated by the container orchestration engine. OpenStack APIs provide a unified interface to all underlying infrastructure, whether bare metal, virtual machines, or containers.
So what is really new?
With this refactoring in Mirantis OpenStack's next release, OpenStack services are not only deployed in containers (or simply 'containerized'), but also orchestrated by Kubernetes, which also orchestrates the containerized applications that end users consume. In this setup, the Kubernetes orchestration engine becomes the common fabric that orchestrates both OpenStack services and containerized end-user applications. In essence, OpenStack services become yet another containerized application that Kubernetes orchestrates.
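To make the idea concrete, here is a minimal sketch of what "OpenStack as yet another containerized application" could look like: a Kubernetes Deployment manifest for an OpenStack control-plane service, built as a plain Python dict. The service name, image tag, registry, and ports are illustrative assumptions for this sketch, not Mirantis' actual packaging.

```python
# Sketch: a hypothetical Kubernetes Deployment manifest for a containerized
# OpenStack Identity (Keystone) service. All names, images, and ports are
# illustrative assumptions, not Mirantis' actual artifacts.

def make_openstack_service_deployment(service, image, port, replicas=2):
    """Return a Deployment manifest so Kubernetes can orchestrate an
    OpenStack control-plane service like any other containerized app."""
    return {
        "apiVersion": "extensions/v1beta1",  # Deployment API group of that era
        "kind": "Deployment",
        "metadata": {"name": service},
        "spec": {
            "replicas": replicas,
            "template": {
                "metadata": {
                    # Labels let Kubernetes group OpenStack services alongside
                    # ordinary end-user application pods on the same fabric.
                    "labels": {"app": service, "tier": "openstack-control-plane"}
                },
                "spec": {
                    "containers": [{
                        "name": service,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }

keystone = make_openstack_service_deployment(
    "keystone", "example-registry/keystone:mitaka", 5000)
print(keystone["metadata"]["name"])  # keystone
```

The point of the sketch is that nothing here is OpenStack-specific from Kubernetes' perspective: the same manifest shape would describe any containerized application it schedules.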
Currently, Mirantis OpenStack follows a biannual release model in alignment with OpenStack's six-month release cycle. Starting with its next release, Mirantis OpenStack will follow a continuous delivery model, with upgrades delivered as containerized services.
Why Does it Matter?
In this Kubernetes-orchestrated, containerized setup, OpenStack services will be deployed in a way consistent with the deployment model that Google Container Engine (GKE) employs internally. This consistency, in turn, enables deploying applications on both on-premises infrastructure and Google Container Engine, thereby enabling a new hybrid model.
This containerized deployment model will also improve the upgradeability of OpenStack services. Although the OpenStack deployment experience has improved a lot, with almost every commercial OpenStack distribution providing some type of installer, day-to-day operations such as upgrades are still complex. With this containerized deployment model, version upgrades are pushed as upgraded containers that users can spin up right away, thereby improving upgradeability.
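A hedged sketch of how "upgrades pushed as upgraded containers" could be expressed: bump the image tag and hand the orchestrator a rolling-update spec, so it replaces service pods incrementally instead of requiring a big-bang upgrade. The image names and the patch shape are illustrative assumptions, not an actual Mirantis delivery pipeline.

```python
# Sketch: an upgrade delivered as a new container image, expressed as a
# rolling-update patch for a Deployment. Image names and rollout parameters
# are illustrative assumptions only.

def rolling_upgrade_patch(container_name, new_image):
    """Return a patch that swaps a service's container image and asks the
    orchestrator to roll it out one replica at a time."""
    return {
        "spec": {
            "strategy": {
                "type": "RollingUpdate",
                # Keep the service available: at most one replica down and
                # at most one extra replica up during the rollout.
                "rollingUpdate": {"maxUnavailable": 1, "maxSurge": 1},
            },
            "template": {
                "spec": {
                    "containers": [
                        {"name": container_name, "image": new_image}
                    ]
                }
            },
        }
    }

patch = rolling_upgrade_patch("keystone", "example-registry/keystone:newton")
print(patch["spec"]["template"]["spec"]["containers"][0]["image"])
# example-registry/keystone:newton
```

Because the upgrade is just a new image reference plus a rollout policy, rolling back is the same operation with the old tag, which is a large part of the operational appeal.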
Who Should Care?
Consumers who have a mix of virtual-machine-based and containerized applications will benefit the most. They can continue using OpenStack infrastructure to run VM-based applications and leverage a Kubernetes-managed container cluster to run containerized applications. They can also utilize the same container cluster to run the OpenStack services themselves.
While this approach of 'OpenStack as an App' is not new (for example, Alex Polvi of CoreOS talked about it during the OpenStack Austin Summit), the partnership with Google and Intel is significant here. Through this partnership, Google gains a path from on-premises OpenStack infrastructure to GKE, strengthening its hybrid story. This also enables hybrid capability between OpenStack and a non-OpenStack public cloud service provider.
Magnum takes a different approach to enabling containers in an OpenStack environment. It leverages a container orchestration engine underneath to manage and orchestrate containers, and exposes container management capabilities through OpenStack APIs. This enables end users to use a common API framework to manage bare-metal, virtual-machine-, and container-based infrastructure. The 'OpenStack as an App on a Container Orchestration Engine' approach is the inverse of the approach Magnum takes.
TripleO (OpenStack on OpenStack) enables deploying OpenStack services through a smaller OpenStack cloud. Red Hat is a big proponent of this project, with the Red Hat OpenStack Platform Director (the installer for Red Hat OpenStack) based on it. This containerized deployment of OpenStack services stands in contrast to the TripleO-based approach.
While the intent of this announcement is clear, the implementation details are still being fleshed out. For example, how will continuous delivery be implemented? Where will the code enabling this delivery model live: under the OpenStack umbrella, under Kubernetes, or both? We expect to hear more details at the upcoming OpenStack Silicon Valley event.
Impact on Usability
With this approach, users need OpenStack for managing virtual machines and Kubernetes for managing containers. We also expect OpenStack services to continue to be used for managing software-defined infrastructure components. The result is multiple platforms rather than a single platform. This approach also diminishes the advantage that OpenStack APIs provide by minimizing their exposure.
Because this approach standardizes on Kubernetes, the choice of container orchestration engine is also limited.
I see this as an important step in enabling more hybrid cloud deployment capabilities between OpenStack and a major public cloud service provider (GKE). However, it is neither a capitulation nor a controversy.
Opinions expressed by DZone contributors are their own.