How Kubernetes Changed Container Management
Improved the ability to scale quickly with a vendor-agnostic architecture.
To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "How has K8s changed the orchestration and management of containers?" Here’s what we learned:
- K8s fundamentally changed things from Docker, which put applications in a nice box. K8s took that box and made it scale to N instances. It provided an infrastructure that allows applications to be more stateless and self-healing. Self-healing is one of the most important aspects of K8s. Scalable, flexible, and self-healing.
- It’s very early days. I’ve witnessed the transition where containers were approached as a lightweight virtualization technology to pack more workload onto machines, since containers share the same OS kernel to achieve more density and use hardware more efficiently. They evolved quickly with the explosion of Docker and support for CI/CD. One of the big drivers was the ability to package a Docker image in a consistent format that would run anywhere: develop, test, and deploy containers to run in different environments in a consistent and reproducible way. From 2014 to 2015 we started seeing all these use cases popping up. After that, we saw the adoption of containers to build apps based on microservices. You need a container as a lightweight virtual machine, you need the packaging of a container image to make everything reproducible, and you need something that can run, schedule, and scale containers up and down, an operating system for containers. We saw Docker Swarm versus Mesos versus K8s, and K8s became the de facto standard. This caused enterprises to begin adopting K8s much more confidently. In the last 12 months, we've been seeing many deployments in production at scale, with thousands, tens of thousands, and hundreds of thousands of containers implementing microservices.
- K8s enables you to scale a specific service that’s been broken out into a microservice. K8s gives you the ability to specify the number of instances, and K8s will make that happen, restarting instances as needed.
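As a sketch of that idea, a minimal Deployment manifest might look like the following (a hedged illustration; the service name and image are placeholders, not from the source). The `replicas` field is the declared instance count that K8s maintains:

```yaml
# Minimal Deployment: K8s keeps 3 instances of this pod running,
# restarting or rescheduling them on failure. Names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments                # hypothetical microservice name
spec:
  replicas: 3                   # desired instance count; K8s reconciles to it
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: example.com/payments:1.0   # placeholder image
```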
- K8s brought Google’s 15 years of knowledge to doing edge processing at speed. We see the adoption of cloud environments that support this. K8s enables teams to run production workloads at real scale, with fault tolerance that would not be possible otherwise.
- The more impactful way that K8s is changing the landscape is by enabling platform teams to scale themselves more effectively. Rather than convincing every team to secure VMs properly, manage network devices – and then following up to make sure they’ve done so – platform teams can now hide all of these details behind a K8s abstraction. This lets both application and platform teams move more quickly: application teams because they don’t need to know all the details, and platform teams because they are free to change them.
- Compared to other container orchestration tools, Kubernetes is faster to scale and deploy, more reliable, and offers more options.
- K8s has emerged as the container orchestration platform of choice: according to our recent Currents survey, more developers (42 percent) use K8s as their preferred container orchestration platform than Docker (35 percent). The fact that the industry has settled on standards for implementing orchestration takes the guesswork out of adopting a solution and has significantly accelerated the development of systems that are stable and interoperable. K8s makes running containerized apps consumable for any developer, regardless of their skills or resources. Automating the management of K8s clusters and the provisioning of nodes makes it faster and easier to run containerized apps. Guided UI experiences and open APIs provide the right level of support throughout the developer’s journey.
- We tried to use Docker Swarm. K8s brought a full-featured, accessible set of capabilities, including namespaces, quick-launch tools, and blue/green deployments, which made it a vendor-agnostic standard. As open source, it is available on every cloud.
- K8s provides a very good first-user experience, the moment of “wow.” A huge community has formed around K8s, advancing projects, building content, and running meetups. Google invested a lot of money in the community. Launching a new open-source project is similar to launching a new product, and co-creating the CNCF provided a neutral environment where everyone has a voice.
- The primary framework (i.e., orchestration manager) and de facto standard that companies use to orchestrate and deploy container-based applications is K8s. Other orchestration managers are Amazon ECS, Apache Mesos, and Docker Swarm. The common tools used to deploy containers depend on the degree of complexity and tasks to automate during container deployment. For basic deployments, YAML file processing can be used. For more flexibility and managing order of operations during deployment, Helm charts and Ansible playbooks are used. For day-two operational tasks that require more complex logic, users can create a Golang Operator to automate the running of these tasks, such as backup and recovery, autoscaling pods, conducting a rolling upgrade, and other common operational tasks. Containers provide an abstraction layer that allows application developers to write their application once and deploy it across various environments. Application developers no longer need to program custom methods to ensure applications will restart on failure; K8s orchestration now fully manages this capability. When using a K8s container orchestration application management framework, application developers need to consider how they plan to distribute their applications (i.e., publish to common K8s marketplace service catalogs), implement security, consider whether ephemeral or persistent storage is right for their specific application, and determine if the application will be deployed across different geographies.
- K8s does a lot of the heavy lifting of managing the end state of what you want the system to be. K8s keeps replicas running, restarting processes automatically. A single configuration can be propagated across all clusters; you do not have to create a specific configuration for each. You get a standardized, single platform to use in a uniform way across the enterprise. This standardizes and normalizes usage: you can package applications and dependencies in a single container and tell K8s to run it.
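The end-state management described above can be sketched as a toy reconciliation loop (a hedged illustration only; `reconcile` and the pod-name scheme are hypothetical and stand in for the real controller machinery):

```python
# Minimal sketch of the declarative reconciliation idea behind K8s
# controllers: compare desired state to observed state and act on the
# difference. Not the real K8s API.

def reconcile(desired_replicas: int, running: list) -> list:
    """Return a pod list matching the desired replica count.

    Crashed pods (None entries) are dropped and replaced, surplus pods
    are removed, and missing pods are started, mirroring K8s
    self-healing behavior.
    """
    # Drop crashed pods; K8s would replace them automatically.
    alive = [p for p in running if p is not None]
    # Scale down: remove surplus pods beyond the desired count.
    alive = alive[:desired_replicas]
    # Scale up: start replacements until the desired count is met.
    next_id = 0
    while len(alive) < desired_replicas:
        name = f"pod-{next_id}"
        if name not in alive:
            alive.append(name)
        next_id += 1
    return alive
```

Calling `reconcile` repeatedly against whatever state is observed always converges on the declared count, which is the essence of the "golden state" model several contributors describe.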
- For customers, K8s is the default way to orchestrate containers, taking the concepts popularized by Docker and making them more enterprise-class. The next step is the need to orchestrate K8s environments themselves, so you’re orchestrating the orchestrator. K8s will orchestrate a specific deployment on a specific site, then across multiple sites and on-prem, as well as orchestrating the application lifecycle. It’s helpful to have a common view of the deployment environment, with a single platform across all of the deployment sites.
- Two years ago it was Docker with Swarm and Mesos with Marathon; K8s has changed the landscape. It has delivered a lot of stability that's usable in production for a large number of customers. Putting application developers first has led to success. Developers encouraged the growth of the community around K8s, promoted by the Cloud Native Computing Foundation.
- K8s managed to change how complex environments are built while maintaining high availability and the ability to integrate sensible rule constructs into K8s itself. The product set is substantially more advanced than it was five years ago. You cannot manage containers manually; K8s is able to keep up with scale while making it easy to do what you want to do. K8s works well with everything except K8s itself: you have to make sure your orchestrator is still there.
- K8s is so transformative, it totally changed how people manage containers. We declare how we imagine our infrastructure to look in a golden state and K8s creates it. Self-healing and responsive, it gives infrastructure a great deal of resilience. It moved the focus from achieving a certain state to working at a higher level of abstraction. Now there is a community and framework to solve problems that people have in common.
- Quite dramatically. K8s orchestrates and manages containers and allows you to build K8s-native applications. Extensibility is part of the power and magic of K8s. It does a good job architecturally with primitives and composites. K8s is all about resources: a pod is a resource, a persistent volume is a resource. You can combine multiple primitives into a single application and define it simply with one file. K8s makes it easy to customize and extend behavior, and you can embed K8s as part of any application.
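As an illustration of combining primitives in one file (a hypothetical example; names, ports, and the image are placeholders), a Deployment and a Service can be declared together, separated by `---`:

```yaml
# Two primitives composed into one application, declared in one file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # placeholder image
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # routes traffic to the Deployment's pods
  ports:
    - port: 80
      targetPort: 8080
```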
- Prior to K8s, orchestration systems tended to be piecemeal or focused only on provider use cases. K8s is the first system that supports orchestrating everything from small environments to massive environments without any significant differences in implementation. By providing a unified set of APIs and concepts, K8s also makes enterprise multi-cloud strategies a reality.
- K8s makes it much simpler for enterprises to develop container-based applications. Because K8s features ubiquitous platform and cloud provider support, organizations can proceed in adopting container technology with confidence, knowing that the ecosystem is strong and that successful implementations are very well precedented. That said, enterprises’ wide embrace of K8s has also invited attackers to test the security of the now countless enterprise targets available, necessitating increased developer focus on K8s and container security.
- K8s provides a completely uniform way of describing containers and all the other resources needed to run them: databases, networks, storage, configuration, secrets, and even those that are custom-defined. That uniformity makes it easier than ever for a single developer to configure their ENTIRE stack (which they need to do to avoid bottlenecks that come from siloed management of those resources). Furthermore, K8s bakes infrastructure-as-code into its API and tooling. All you need for reliable versioning, peer review, and roll-back is to put the very same YAML files you hand to the K8s API into source control. Those two things, combined with the pluggability that’s so pervasive throughout the K8s architecture (Admission Controllers, in particular), provide revolutionary control and visibility into the current state of your K8s cluster, which leads to improved operations, security, and compliance.
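The value of keeping those YAML files in source control can be sketched with a toy drift check (a hedged illustration; `diff_spec` is a hypothetical helper, and the dicts stand in for parsed manifests, not any real K8s tooling):

```python
# Sketch of drift detection: diff the spec declared in source control
# against what is live in the cluster, reporting dotted paths that differ.

def diff_spec(declared: dict, live: dict, path: str = "") -> list:
    """Return a list of dotted key paths where the two specs differ."""
    drift = []
    for key in sorted(set(declared) | set(live)):
        here = f"{path}.{key}" if path else key
        if key not in declared or key not in live:
            drift.append(here)          # key added or removed out-of-band
        elif isinstance(declared[key], dict) and isinstance(live[key], dict):
            drift.extend(diff_spec(declared[key], live[key], here))
        elif declared[key] != live[key]:
            drift.append(here)          # value changed out-of-band
    return drift

# Example: someone scaled the deployment by hand, bypassing source control.
declared = {"kind": "Deployment", "spec": {"replicas": 3, "image": "web:1.2"}}
live     = {"kind": "Deployment", "spec": {"replicas": 5, "image": "web:1.2"}}
```

Here `diff_spec(declared, live)` flags `spec.replicas` as having drifted, which is the kind of visibility versioned manifests make possible.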
- Containerization of applications is a relatively recent industry trend, and its initial popularity was driven by Docker. Since K8s, adoption has picked up significantly simply because it has more to offer: an ecosystem of tools, an Operator framework to automate CI/CD, easy integration with existing cloud services via the Service Catalog and Service Brokers, a stronger emphasis on the declarative pattern over the imperative that allows easy management of application state by K8s, and several other features which make it a holistic container platform, not just an orchestrator. This has attracted the attention of large businesses, who now believe K8s can truly solve the bulk of their infrastructure and operational challenges and allow them to focus on their core business.
- K8s became the enabler, and containers are now mainstream. It’s not just a change to a specific way of running and deploying software but also an architectural modernization. Organizations were previously cautious because of a lack of tooling. Now that tooling is accessible, every enterprise in the world can leverage a new, easy-to-use, self-service ecosystem.
Here’s who shared their insights:
- Dipti Borkar, V.P. Product Management, and Marketing, Alluxio
- Matthew Barlocker, Founder and CEO, Blue Matador
- Carmine Rimi, Product Manager Kubernetes, Kubeflow, Canonical
- Phil Dougherty, Sr. Product Manager, DigitalOcean
- Tobi Knaup, Co-founder and CTO, D2iQ
- Tamas Cser, Founder and CEO, Functionize
- Kaushik Mysur, Director of Product Management, Instaclustr
- Niraj Tolia, CEO, Kasten
- Marco Palladino, CTO and Co-founder, Kong
- Daniel Spoonhower, Co-founder and CTO, LightStep
- Matt Creager, Co-founder, Manifold
- Ingo Fuchs, Chief Technologist, Cloud and DevOps, NetApp
- Glen Kosaka, VP of Product Management, NeuVector
- Joe Leslie, Senior Product Manager, NuoDB
- Tyler Duzan, Product Manager, Percona
- Kamesh Pemmaraju, Head of Product Marketing, Platform9
- Anurag Goel, Founder and CEO, Render
- Dave McAlister, Community Manager and Evangelist, Scalyr
- Idit Levine, Founder and CEO, Solo.io
- Edmond Cullen, Practice Principal Architect, SPR
- Tim Hinrichs, Co-founder and CTO, Styra
- Loris Degioanni, Founder and CTO, Sysdig
Opinions expressed by DZone contributors are their own.