TechTalks With Tom Smith: What Devs Need to Know About Kubernetes
Get started with security, architecture, and CI/CD — there's much more after that.
To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "What do developers need to keep in mind when working with K8s?" Here’s what we learned:
- Try K8s out in a small, simple, stateless environment; stateful is more complicated. Understand the core concepts and move up from there. There are a lot of small configuration details where security needs to be considered. Make the process as repeatable as possible.
- I recommend developers become familiar with the Cloud Native Computing Foundation (CNCF) stack, with K8s as the centerpiece — technologies like service meshes and runtime security for K8s. Know what K8s is as well as the whole cloud-native ecosystem. Go is the language of K8s and cloud-native. Follow your passion, and focus your attention on the stack and Go.
- 1) Containers were made so developers could stop worrying about production. Think about the configuration of your application and communicate the details to ops. 2) Understand and communicate your resource requirements. 3) Understand dependent services. Be aware of security crackdowns and the need to identify dependencies.
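The second point above, communicating resource requirements explicitly, can be enforced mechanically. This is a minimal sketch, assuming a hypothetical pre-deploy check over a container spec represented as a plain dict (mirroring the `resources.requests`/`resources.limits` fields of a K8s container spec):

```python
# Hypothetical pre-deploy check: report which CPU/memory requests and
# limits a container spec fails to declare, so ops knows what the app needs.
def missing_resources(container: dict) -> list:
    """Return which of requests/limits (cpu, memory) are absent."""
    resources = container.get("resources", {})
    missing = []
    for section in ("requests", "limits"):
        for key in ("cpu", "memory"):
            if key not in resources.get(section, {}):
                missing.append(f"{section}.{key}")
    return missing

container = {
    "name": "api",
    "image": "example/api:1.0",
    "resources": {"requests": {"cpu": "250m", "memory": "128Mi"},
                  "limits": {"memory": "256Mi"}},
}
print(missing_resources(container))  # ['limits.cpu']
```

A check like this can run in CI so that an undeclared requirement fails the build rather than surfacing as a noisy-neighbor incident in production.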
- When taking legacy applications to K8s (or developing new apps), developers need to re-architect their applications to run in, and take advantage of, the highly distributed environment K8s provides. Many legacy applications and software products were designed to run in a scale-up architecture and are delivered as a single stack of tightly coupled software (i.e., a single binary executable). To leverage the distributed nature and scale-out benefits containers offer, developers need to redesign and break apart individual components and services so that they can run, scale, and be upgraded independently. Developers and DevOps deployment engineers will also need to consider how best to secure their K8s environments. Containers share an OS kernel and require root-level authorization (in Linux environments) to run and perform tasks such as accessing persistent storage. If container security is not implemented, attackers can potentially extend their threat beyond the container into the underlying OS and other containers.
- The K8s API is different from most modern APIs — it's intent-based, so you tell it what you want K8s to do and don't worry about how it should make that happen. It's incredibly extensible, resilient, and powerful. However, this intent-based API presents challenges for security. None of the standard access control solutions (role-based access control, attribute-based access control, access control lists, or IAM policies) are powerful enough to enforce basic policies like who can change labels on a pod, or which image repositories are safe. K8s Admission Control was built into the API to solve this problem. Admission Controllers don't address access control issues out of the box, but they do let you use a webhook to enforce authorization policies that reduce risk beyond what RBAC alone can achieve.
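To make the "which image repositories are safe" example concrete, here is a sketch of the decision logic a validating admission webhook might apply: allow a pod only if every container image comes from an approved registry. The request/response shapes follow `admission.k8s.io/v1`, but the allowlist and the sample request are made-up illustrations, not a complete webhook server:

```python
# Illustrative allowlist — in practice this would come from configuration.
ALLOWED_REGISTRIES = ("registry.example.com/",)

def review(admission_review: dict) -> dict:
    """Build an AdmissionReview response that rejects untrusted images."""
    request = admission_review["request"]
    containers = request["object"]["spec"].get("containers", [])
    bad = [c["image"] for c in containers
           if not c["image"].startswith(ALLOWED_REGISTRIES)]
    response = {"uid": request["uid"], "allowed": not bad}
    if bad:
        response["status"] = {"message": f"untrusted images: {bad}"}
    return {"apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": response}

req = {"request": {"uid": "123",
                   "object": {"spec": {"containers": [
                       {"name": "app", "image": "docker.io/evil:latest"}]}}}}
print(review(req)["response"]["allowed"])  # False
```

In a real deployment this function would sit behind an HTTPS endpoint registered via a `ValidatingWebhookConfiguration`; the point here is only that the policy is ordinary code evaluating the submitted object.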
- Before you build applications using K8s, figure out the right architecture to build around. The right architecture will make the adoption of K8s more successful. Follow best practices to get the most out of K8s. Surface problems earlier in the development cycle. A good CI/CD process lets you recover from problems more quickly.
DevOps — CI/CD
- Containers and K8s enable developers to package applications and run them anywhere. This gives developers freedom. Developers should not try to operate K8s themselves, as they tend to underestimate what is required. While it's easy to develop and deploy locally, it becomes difficult at a larger scale. Leave large-scale deployment to ops teams or an external company. Focus on building cloud-native applications using cloud-native principles. Focus on getting DevOps automation done right. Use agile techniques. Do incremental development for fast feedback. Do blue/green deployments. Use best-of-breed tools — most are open source, and companies provide support for them as well. Look at the Helm charts and Docker repositories that are already out there. Leverage ready-made, easy-to-use tools.
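The blue/green deployment mentioned above is, mechanically, a cutover: two versions of the app run side by side, and traffic moves by repointing the Service selector from the "blue" Deployment's pods to the "green" ones. A minimal sketch of that switch, with the Service represented as an illustrative dict rather than a real cluster call:

```python
def switch_traffic(service: dict, target: str) -> dict:
    """Return a copy of the Service whose selector targets the given color."""
    assert target in ("blue", "green")
    return {**service,
            "spec": {**service["spec"],
                     "selector": {**service["spec"]["selector"],
                                  "version": target}}}

svc = {"kind": "Service",
       "metadata": {"name": "web"},
       "spec": {"selector": {"app": "web", "version": "blue"},
                "ports": [{"port": 80}]}}

# Cut traffic over to green; rolling back is the same call with "blue".
print(switch_traffic(svc, "green")["spec"]["selector"]["version"])  # green
```

Because the selector change is one small, atomic update, rollback is as cheap as rollout — which is the core appeal of blue/green over in-place upgrades.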
- Developers don’t need to know about K8s. CI/CD needs to take care of it. If you are an infrastructure engineer, learn all of the tools around it. Write applications: spin up Minikube and write the application. K8s is a DevOps issue.
- When building apps, avoid close coupling different microservices although it might be tempting as a quick hack to address a requirement. Automation is the key to the success of a K8s project, so, leverage everything that K8s management tools have to offer to build a fully automated CI/CD. Leverage managed services and open source tools to reduce time to deliver your app as they are easy to integrate for apps running on K8s. Use standardized APIs where possible to keep your application portable.
- The basics of K8s can be learned in an hour; mastering it will take the rest of your life. The best place to start is by reading DZone and looking at the Refcards. While the language is not complex, it's not something you use on a daily basis, and there's no need to memorize something you use once in a while. It's really easy to get started: clouds these days offer K8s built in. K8s excels at scale and interaction, yet it's easy to start with. Teach yourself the basics and learn from your own mistakes.
- Don’t throw away your rulebook. Keep trusting your intuition. Make sure networking is set up correctly. Set up appropriate request limits. All of the standard stuff still applies. Be explicit about the shape of your infrastructure. Be explicit with your manifests. Get to KubeCon, and if you encounter a problem, reach out to the community.
- Keep it simple. Save the more complicated things for later in the process. Write the software as a module, add it to a container, and deploy it in K8s.
- K8s will create a bunch of environment variables for you. This can cause issues if your own environment variables share names with the ones K8s is defining. So, my advice is to use names that are less ambiguous and more unique.
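For every Service in a pod's namespace, K8s injects variables such as `<NAME>_SERVICE_HOST` and `<NAME>_SERVICE_PORT` into the pod's environment, so a Service named `database` silently defines `DATABASE_SERVICE_HOST`. A sketch of the collision and the unique-prefix convention (the `MYAPP_` prefix and the values are made up for illustration):

```python
import os

# Simulate what the pod sees: one variable injected by K8s for a Service
# named "database", one set by the application's own configuration.
os.environ["DATABASE_SERVICE_HOST"] = "10.0.0.7"   # injected by K8s
os.environ["MYAPP_DATABASE_HOST"] = "db.internal"  # set by your config

def app_config(prefix: str = "MYAPP_") -> dict:
    """Read only the variables owned by the application, prefix stripped."""
    return {k[len(prefix):]: v for k, v in os.environ.items()
            if k.startswith(prefix)}

# The app reads its own value and cannot be shadowed by K8s-injected names.
print(app_config()["DATABASE_HOST"])  # db.internal
```

Reading configuration only through an app-owned prefix means new Services appearing in the namespace can never change your app's behavior by accident.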
- The biggest thing to look out for with K8s is the assumption that it will magically solve all your infrastructure problems. What’s great about K8s is that it goes a long way toward isolating those problems (so that platform teams can solve them more effectively) but they’ll still be there. For example, in addition to OS upgrades, now you’ve got to master node upgrades as well. But the good news is that application developers don’t need to think about these issues anymore.
- Containers are a deployment method, but not a packaging method. You should still seek to structure your application in a cleanly packable way so that you can generate verifiable (signed) system packages and use these when building your container images. Containers are not a cure-all to your packaging woes. Also, end-to-end testing is a must with applications hosted on K8s, especially testing for performance regressions. You can’t rely on unit testing alone.
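The performance-regression testing called for above can be reduced to a simple gate in CI: fail the build if the new build's median end-to-end latency exceeds the baseline by more than a tolerance. A minimal sketch, with made-up latency samples:

```python
import statistics

def regressed(baseline_ms: list, current_ms: list,
              tolerance: float = 0.10) -> bool:
    """True if current median latency exceeds baseline median by > tolerance."""
    limit = statistics.median(baseline_ms) * (1 + tolerance)
    return statistics.median(current_ms) > limit

baseline = [102, 98, 101, 99, 100]   # recorded on the previous release
current = [130, 128, 131, 127, 129]  # measured against the new build
print(regressed(baseline, current))  # True
```

Using the median rather than the mean keeps a single outlier request from flipping the gate; a real suite would also track tail latencies (e.g. p95/p99) per endpoint.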
- Open-source technologies are driving a new transformation within the enterprise. Looking back at this new era of software that made microservices possible, like K8s, Elasticsearch, Kafka and so on, we see they all share the same common denominator: open source. While the vendors “pushed down” the previous generation of technologies, this new era of software is built by the teams. It’s a bottom-up disruption.
- Developers should think about why they are using K8s in the first place. To me, the biggest reason for developers to use K8s is to deploy new features with minimal downtime. It enables users to easily scale software because of its immutable and declarative nature. Developers looking for mature deployment and monitoring options, with quick and reliable response times are well-suited to use containers. Applications that are containerized and have a microservices architecture are ideal to run on K8s, such as video streaming and advertising.
Here’s who shared their insights:
- Dipti Borkar, V.P. Product Management & Marketing, Alluxio
- Matthew Barlocker, Founder & CEO, Blue Matador
- Carmine Rimi, Product Manager Kubernetes, Kubeflow, Canonical
- Phil Dougherty, Sr. Product Manager, DigitalOcean
- Tobi Knaup, Co-founder and CTO, D2iQ
- Tamas Cser, Founder & CEO, Functionize
- Kaushik Mysur, Director of Product Management, Instaclustr
- Niraj Tolia, CEO, Kasten
- Marco Palladino, CTO & Co-founder, Kong
- Daniel Spoonhower, Co-founder and CTO, LightStep
- Matt Creager, Co-founder, Manifold
- Ingo Fuchs, Chief Technologist, Cloud & DevOps, NetApp
- Glen Kosaka, VP of Product Management, NeuVector
- Joe Leslie, Senior Product Manager, NuoDB
- Tyler Duzan, Product Manager, Percona
- Kamesh Pemmaraju, Head of Product Marketing, Platform9
- Anurag Goel, Founder & CEO, Render
- Dave McAlister, Community Manager & Evangelist, Scalyr
- Idit Levine, Founder & CEO, Solo.io
- Edmond Cullen, Practice Principal Architect, SPR
- Tim Hinrichs, Co-founder & CTO, Styra
- Loris Degioanni, Founder & CTO, Sysdig
Opinions expressed by DZone contributors are their own.