How to Implement Kubernetes
Start with security, planning, skills, and data locality.
To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "What are the most important elements of implementing K8s for orchestrating containers?" Here’s what we learned:
- Four things: 1) security; 2) you don’t have to go “all-in” on K8s (for example, don’t use it for databases); 3) capacity planning for CPU; 4) your K8s structure will mimic your team structure.
- Networking, storage, security, monitoring, and management capabilities are all essential elements of implementing Kubernetes container orchestration. Businesses stand to realize tremendous benefits from the fast pace at which both Kubernetes and the container ecosystem are advancing. However, this pace also increases the challenge of keeping up with new functionality that is critical to success, especially in the area of security.
- K8s is still new, even though it’s been around for five years. The lack of expertise and talent is the number one challenge. What you want from an enterprise standpoint is a standardized, shared platform you can use across multiple clouds and on-prem. Containers are portable, and K8s has a standard open-source API, so you can build a shared platform that runs anywhere. The challenge is having the right people with the skills, and then day-two operations. Once in production, you have to deal with upgrades (a new version comes out every three months), patching, backup, disaster recovery, and scale. K8s abstracts the infrastructure from the developers through a declarative approach: you define the end state you want, tell K8s, and it makes it happen. If something fails, it will automatically be recreated. The downside is that when something goes wrong with the system, you have to search through multiple levels of abstraction to figure out where the problem is. To have a successful implementation, you need a team that knows what’s happening across the landscape; if something fails, you need to know the nitty-gritty details of all of the services that are running. Troubleshooting, debugging, upgrading the cluster, SLA management, and day-two operations are all challenging today.
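The declarative model described above can be sketched in a few lines of Python. This is a toy, not the real controller: the `reconcile` function and the pod-naming scheme are invented for illustration, but the pattern (declare a desired state and let a loop converge the actual state toward it, recreating anything that fails) is the one Kubernetes controllers follow.

```python
# Toy reconciliation loop: compare desired state against observed state
# and emit the actions needed to converge. Kubernetes controllers run
# this kind of loop continuously against the API server.

def reconcile(desired_replicas, running_pods):
    """Return the (action, pod_name) steps needed to reach the desired
    replica count. `running_pods` is the list of pods currently alive."""
    actions = []
    missing = desired_replicas - len(running_pods)
    # Too few pods: create replacements (e.g., after a node failure).
    for i in range(missing):
        actions.append(("create", f"pod-{len(running_pods) + i}"))
    # Too many pods: scale down the surplus.
    for pod in running_pods[desired_replicas:]:
        actions.append(("delete", pod))
    return actions
```

Note that the caller never scripts *how* to recover from a failure; it only restates *what* it wants, and the loop works out the rest. That is also why debugging is harder: the corrective actions happen behind this abstraction.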
- There’s a learning curve with the technology. Most application developers probably know it by now, but on the data engineering side it’s quite new. The first step is making it easy for developers to understand what the pieces are and how to use them. Then there are the important aspects of data locality within the K8s cluster, and making workloads stateful or stateless as needed. These are important concepts to explain to end users, along with how they fit with K8s.
- 1) Labels are your friends: label everything. They are the road map that lets you figure out where things are going. 2) Keep in mind that you don’t know where anything is. Build your environment to be specific to a purpose, not to a location. In K8s, it’s not “as small as possible,” it’s “as small as necessary.” Don’t over-engineer your environment into a thousand tiny little pieces; deliver the information needed from each component.
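The advice to label everything pays off because selectors let you address workloads by purpose rather than by location. A minimal sketch of equality-based label matching, with invented pod data (`matches` and `select` are illustrative helpers, not Kubernetes API calls):

```python
# Equality-based label selection: a selector matches a pod when every
# key/value pair in the selector appears in the pod's labels.

def matches(selector, labels):
    """True if `labels` satisfies every key/value in `selector`."""
    return all(labels.get(k) == v for k, v in selector.items())

# Hypothetical pods; in a real cluster these come from the API server.
pods = [
    {"name": "api-1", "labels": {"app": "api", "env": "prod"}},
    {"name": "api-2", "labels": {"app": "api", "env": "staging"}},
    {"name": "db-1",  "labels": {"app": "db",  "env": "prod"}},
]

def select(pods, selector):
    """Names of all pods matching the selector, regardless of node."""
    return [p["name"] for p in pods if matches(selector, p["labels"])]
```

Selecting on `{"env": "prod"}` finds every production pod without knowing, or caring, which machine any of them runs on, which is exactly the purpose-over-location point above.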
- We are seeing more people adopt K8s in different types of deployments, different flavors, and different approaches. Some customers use a “build your own” approach. We see people using on-prem vendors offering pre-packaged K8s distributions (e.g., Mesosphere, Docker, VMware), and a lot is available from public cloud vendors. We also see people adopting a consulting-based approach. The exact mix depends on what kind of apps you are running on K8s, what kind of users you are servicing, and how advanced your K8s deployments are. We see a lot of reliance on both cloud and on-prem (Red Hat and IBM are the most prominent). We recommend understanding where you are in your journey and who your users are in order to figure out the right mix. And when deploying these technologies, start with people: when services are split between teams, those teams need to work well together in terms of technology and culture, across engineering and ops.
- Declarative APIs: the customer says, “here’s what I want,” and knows it will happen. Applications are better off stateless, able to get their state from somewhere else, like a database. Observability is a huge issue across a broad number of microservices.
- The overall strategy of automating testing is critical. We see clients trying to find the right way to test, and there is a huge variety of techniques and approaches. What needs to be tested? How are you set up? What is your maturity, and what is the right level of automation? Test the right things in the right way: which tests can run in parallel, how to deal with data management, how to leverage orchestration capability, and which devices to include. It depends on the maturity of the team and the software. Also consider integrations: what else is your testing touching, and what are the dependencies? Problems arise when environments cannot handle the scale and you fall short of your expectations of what’s possible.
- K8s alone won’t solve your problem; by itself it’s not an enterprise-grade orchestration stack. You should have the same concerns for K8s as when you put any software into production: security, monitoring, and debugging. There are 500+ open-source products for cloud-native networking, which is impossible to keep up with and maintain, and K8s comes out with new releases all the time.
- We have a consulting package where we do a lot of training around developing and managing K8s clusters. Look for micro-improvements across the massive ecosystem of 500 different open-source tools; each is a new area of discovery for people getting into cloud-native computing. We help customers consume open source with little to no friction, including security updates.
- The most important element of implementing K8s to orchestrate containers is its ability to declaratively define application policies that are enforced at runtime to maintain the desired state (e.g., the number of application pods, their types, and their attributes), ensuring critical applications always remain available. More recently, auto-scaling pods has also become a very important element for ensuring predefined application SLAs are always met. The ease of deploying containers is important as well: companies require the ability to develop, test, and deploy container-based applications quickly and seamlessly using their CI/CD pipelines.
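The pod auto-scaling mentioned above can be approximated by the calculation the Kubernetes Horizontal Pod Autoscaler documents: desired replicas scale with the ratio of the observed metric to its target, clamped to configured bounds. A simplified sketch (the function name and default bounds are illustrative):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Toy horizontal-autoscaler math:
    desired = ceil(currentReplicas * currentMetric / targetMetric),
    then clamped into [min_replicas, max_replicas]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 4 replicas averaging 90% CPU against a 60% target scale up to 6, while the same 4 replicas at 30% scale down to 2; a real autoscaler adds stabilization windows and tolerances on top of this core ratio.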
- 1) Have a plan first, driven by your goals for moving to K8s. Moving from monolithic apps to microservices running on Kubernetes has many benefits, but trying to solve every problem at the same time is a recipe for delayed migration and frustration. Know what you’re trying to achieve (or better yet, the sequence of goals you’re trying to achieve) and design a plan to accomplish it. The roadmap is key: think about how you stage the adoption of K8s and the migration from monolith to microservices, and how that will get rolled out across the organization. There’s a tremendous amount of new technology in the cloud-native ecosystem; fold that technology into the roadmap too, and realize that the roadmap can and will change as you gain experience with each piece of the new stack. 2) Don’t forget that a new implementation doesn’t eliminate the need to address all the old requirements around operations, security, and compliance. Factors to consider: What kind of app are you creating? Internal or external? Will it have customer data? How often will it be updated? Who has access, and how will you enforce that access? Here Kubernetes comes to the rescue: it provides a revolutionary way of implementing custom guardrails so that you can prevent problems before they happen. Kubernetes lets you inject custom rules and regulations right into the API server (via admission control) that enforce an unprecedented level of control. And because Kubernetes provides a uniform way of representing resources that used to be contained in silos (e.g., compute, storage, network), you can impose cross-silo controls. 3) Take your policy out of PDFs and put it into code. When your infrastructure is code and your apps are code, your policy should be code too.
The business needs developers to push code rapidly — to improve the business’s software faster, ideally, than competitors — but the business also needs that software to follow the same age-old operations, security, and compliance rules and regulations. The only way to succeed at both is to automate the enforcement of those rules and regulations by pulling them out of PDFs and wikis and moving them into the software. That’s what policy-as-code is all about.
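Policy-as-code in the admission-control sense boils down to a function that inspects a requested resource and returns allow or deny with reasons. A toy sketch under stated assumptions: the pod spec is simplified (not the full Kubernetes API shape), and `registry.example.com` is an invented approved registry. A real implementation would run these checks in an admission webhook rather than in the application.

```python
def admit(pod_spec):
    """Toy admission-control policy: deny pods that run as root or pull
    images from outside an approved registry. Returns (allowed, reasons).
    Missing securityContext is treated as root, i.e., deny by default."""
    violations = []
    for c in pod_spec.get("containers", []):
        if not c.get("image", "").startswith("registry.example.com/"):
            violations.append(f"{c['name']}: image not from approved registry")
        if c.get("securityContext", {}).get("runAsUser", 0) == 0:
            violations.append(f"{c['name']}: must not run as root")
    return (len(violations) == 0, violations)
```

The point of the article's argument is that rules like these, once written as code, are enforced automatically on every deployment instead of living in a PDF that someone may or may not read.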
- Ensure the application is built as a set of independent microservices that are loosely coupled to serve the business. This helps get the most out of Kubernetes. Ensure microservices have built-in resilience (to handle failures), observability (to monitor application), and administrative features (to allow for elastic scaling, data backup, access control, and security, etc.). Essentially, having the application architected the right way is critical to reaping the benefits of Kubernetes.
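The built-in resilience this quote calls for often starts with something as small as retrying a flaky dependency with backoff, so a failed call to a sibling service doesn't cascade. A minimal sketch (the helper name and its defaults are illustrative, not from any specific library):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call `fn`, retrying with exponential backoff on failure.
    Re-raises the last exception once the attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Back off: 1x, 2x, 4x, ... the base delay between attempts.
            time.sleep(base_delay * (2 ** attempt))
```

Production services typically layer more on top (jitter, circuit breakers, and timeouts), but the principle is the same: each microservice handles its dependencies' transient failures instead of assuming the platform will.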
- One of the most important elements is ensuring K8s remains simple enough for developers to use. Developers are growing more committed to Kubernetes: in 2016, just under half said they were committed to the technology, but by 2017, 77 percent said the same. Despite Kubernetes’ growing popularity, it is still often challenging for developers to manage manually. Our approach focuses on ensuring that clusters are configured for high availability, stability, and best practices. Kubernetes has many knobs that can be turned to limit resources, segregate components, and configure the way the system performs. It can be challenging to do this on your own, so we have worked hard to provide users with a platform that has best practices baked in from the start.
Here’s who shared their insights:
- Dipti Borkar, V.P. Product Management & Marketing, Alluxio
- Matthew Barlocker, Founder & CEO, Blue Matador
- Carmine Rimi, Product Manager Kubernetes, Kubeflow, Canonical
- Phil Dougherty, Sr. Product Manager, DigitalOcean
- Tobi Knaup, Co-founder and CTO, D2iQ
- Tamas Cser, Founder & CEO, Functionize
- Kaushik Mysur, Director of Product Management, Instaclustr
- Niraj Tolia, CEO, Kasten
- Marco Palladino, CTO & Co-founder, Kong
- Daniel Spoonhower, Co-founder and CTO, LightStep
- Matt Creager, Co-founder, Manifold
- Ingo Fuchs, Chief Technologist, Cloud & DevOps, NetApp
- Glen Kosaka, VP of Product Management, NeuVector
- Joe Leslie, Senior Product Manager, NuoDB
- Tyler Duzan, Product Manager, Percona
- Kamesh Pemmaraju, Head of Product Marketing, Platform9
- Anurag Goel, Founder & CEO, Render
- Dave McAlister, Community Manager & Evangelist, Scalyr
- Idit Levine, Founder & CEO, Solo.io
- Edmond Cullen, Practice Principal Architect, SPR
- Tim Hinrichs, Co-founder & CTO, Styra
- Loris Degioanni, Founder & CTO, Sysdig
Opinions expressed by DZone contributors are their own.