
TechTalks With Tom Smith: Executive Insights on Kubernetes in the Enterprise


Security, planning, knowledge, and data locality are four keys to success with Kubernetes deployments in the enterprise.

· Microservices Zone ·

Tom Smith

Interviewer to the stars.


To understand the current and future state of Kubernetes (K8s) in the enterprise, I gathered insights from IT executives from 22 companies. Here’s what I learned:

Security, planning, people with K8s skills, and data locality are four of the keys mentioned most frequently for the successful adoption and implementation of K8s. Think about how to claim and reclaim storage resources to address security, performance, reliability, and availability: all of the traditional data center operations concerns. You should have the same concerns for K8s as when you put any software into production: security, monitoring, and debugging.


Build your environment to be specific to a purpose, not to a location. Have a plan driven by your goals. Start with people who have knowledge of K8s and who will work well together when services are divided among teams. The team needs to know what's going on across the landscape as well as understand what's required for day two operations: upgrades, patches, disaster recovery, and scale.

Think about how to handle state, whether that means using StatefulSets backed by your provider's block storage devices or moving to a completely managed storage solution. Implementing stateful services correctly the first time around will save you huge headaches.
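As a minimal sketch of the first option (the names, image, and storage class here are illustrative assumptions, not from the interviews), a StatefulSet can claim per-pod block storage from the provider through a volumeClaimTemplate, so each replica keeps its own volume across restarts and rescheduling:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:12     # hypothetical stateful workload
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PersistentVolumeClaim per replica, retained across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2    # assumed provider block-storage class
        resources:
          requests:
            storage: 10Gi
```

The key design point is the volumeClaimTemplate: unlike a Deployment, deleting or rescheduling a pod does not delete its volume, which is what makes getting state right "the first time around" tractable.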

Kubernetes has made it easier to scale and achieve speed to market in a vendor-agnostic way. We're seeing deployments in production at scale with thousands, tens of thousands, and hundreds of thousands of containers implementing microservices. K8s provides infrastructure that's more stateless, self-healing, and flexible. K8s enables teams to scale production workloads and achieve fault tolerance in ways not previously possible.

K8s is faster to scale and deploy, more reliable, and offers more options. It lets both the application and platform teams move more quickly: application teams don't need to know all of the infrastructure details, and platform teams are free to change them.

A single configuration can be propagated across all clusters. K8s enjoys ubiquitous platform and cloud provider support. K8s provides a completely uniform way of describing containers and all other resources needed to run them: databases, networks, storage, configuration, secrets, and even those that are custom-defined.
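To illustrate that uniformity (the names and image below are hypothetical placeholders), a workload and the secret it depends on are described in the same declarative format, in one file, and can be applied unchanged to any cluster:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-credentials
stringData:
  token: replace-me            # placeholder value, injected per environment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0   # hypothetical image
          envFrom:
            - secretRef:           # same declarative style for config as for workloads
                name: api-credentials
```

Custom resources defined via CRDs follow this exact same shape, which is what lets one configuration style cover everything from databases to bespoke operators.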

K8s enhances the security of containers via role-based access control (RBAC), reduced exposure, automation, and network firewall policies. K8s solves more problems than it creates with regard to security. RBAC enforces relationships between resources, and pod security policies control the level of access pods have to each other. K8s provides the access and mechanisms to bring in other tools to secure containers.
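As a hedged sketch of how RBAC limits access (the namespace, service account, and role names are assumptions for illustration), a read-only Role bound to a CI service account looks like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: prod
  name: pod-reader
rules:
  - apiGroups: [""]              # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only; no create/delete/exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: prod
  name: read-pods
subjects:
  - kind: ServiceAccount
    name: ci-runner              # hypothetical CI service account
    namespace: prod
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the Role enumerates only the verbs it grants, anything not listed (creating pods, reading secrets) is denied by default, which is the "reducing exposure" the interviewees describe.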

The security benefits of containers and K8s outweigh the risks because a container tends to be much smaller than a VM running NGINX, which carries a full operating system with many processes and services. Containers have far less exposure and a smaller attack surface.

By automating the rules for where things go in a stabilized environment, you eliminate a lot of the human errors that occur in a manual configuration process. K8s standardizes container deployment: set it once and forget it.

Due to the increased autonomy of microservices deployed as pods in K8s, a thorough vulnerability assessment of each service, change control enforcement on the security architecture, and strict security enforcement are all critical to defending against threats. Practices like automated monitoring, auditing, and alerting, OS hardening, and continuous system patching are musts.

K8s use cases tend to be related to speed, CI/CD pipelines, cost reduction, legacy modernization, and scale. Building on top of K8s has helped teams move faster with smaller headcounts. It accelerates every phase of the application lifecycle and reduces time to market. It helps automate DevOps tasks and builds in best practices quickly. K8s makes it possible to adopt continuous development and deployment by ensuring deliveries are made to the right place at the right time.

Better performance results in more cost savings, and K8s helps reduce infrastructure costs. K8s also helps reduce technical debt as teams pursue legacy containerization and modernization. There's automatic scale-in and scale-out to adjust quickly to application workload demands while maintaining integrity when scaling.
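The automatic scale-in and scale-out the interviewees mention is typically handled by a HorizontalPodAutoscaler; this sketch (target name and thresholds are assumptions) grows and shrinks a Deployment with CPU load:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:              # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2               # floor: never scale in below two pods
  maxReplicas: 20              # ceiling caps the infrastructure cost
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

The min/max bounds are where the cost argument lives: idle capacity is released automatically, but a runaway workload can't scale the bill without limit.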

The most common failures are around lack of skills/knowledge, complexity, security, and day two operations. A lot of it is a skills gap. K8s talent is very hard to find and retain. There is a lack of understanding of how K8s functions. A common challenge is how to get a team up to speed quickly. Recruit experts with the depth of K8s and DevOps knowledge required to create proper tools and implement application workflows in containerized environments.

People give up on implementations because it's too hard. There is significant complexity and a steep learning curve. People underestimate the complexity of installing and operating K8s. It's easy to get started, but then people are surprised by the complexity once they put it into production with security and monitoring in place.

Enterprises can find it challenging to implement effective security solutions. We see security and operations failures arise because teams don't implement any policy around the creation of external load balancers and ingresses. We see failures around security, with new vulnerabilities weekly that require patches. Day two operations like upgrades and patches need to be managed. There's an ongoing need to deploy persistent storage, to monitor and alert on failure events, and to deploy applications across multiple K8s clusters.
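One common baseline for the missing policy described above (sketched here assuming a namespace named prod) is a default-deny NetworkPolicy, so that no pod accepts traffic that hasn't been explicitly allowed by a more specific policy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod
spec:
  podSelector: {}          # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress              # no ingress rules are listed, so all inbound traffic is denied
```

Teams then add narrowly scoped allow policies per service, which forces the creation of each externally reachable path to be a deliberate, reviewable change rather than an accident.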

Concerns regarding the current state of K8s revolve around complexity, security, and finding people with sufficient skills. Complexity is a big issue. Deploying K8s is a relatively new practice, and it can be very challenging to pick the right set of technology and tools.

Security controls are lagging behind, and newcomers may adopt inadequate K8s security measures, allowing attackers with increasingly sophisticated exploits to succeed. You cannot assume that managed K8s offerings are somehow inherently secure, or that clamping down CI/CD access to just a few DevOps people avoids the risk.

People who try to implement K8s on their own have trouble maintaining their own platform. People have a tendency to assume they need K8s when they don't. A lot of people are flying blind, running random third-party containers without monitoring them. People assume it's self-healing and ignore the details.

K8s is destined to become the platform for developers, driven by the adoption of the cloud and IoT. It will become the standard platform for running applications everywhere, from IoT devices up to the cloud, much like the excitement Java once created. The future is in IoT, with K8s enabling communication and rollbacks. You'll be able to make IoT devices nodes in a larger K8s cluster for faster updates and more services. The K8s cloud operating system will extend to hybrid, multi-cloud operating systems.

There will be more externalization of the platform and enterprise hardening of K8s. It will become more stable at the core while also becoming more extensible. The toolkit for K8s operators will capture more complicated lifecycle automation. Containers will eventually replace virtual machines, and K8s will support other infrastructure further up the stack. The technology will be increasingly standardized, stable, and portable going forward. The adoption of open source strategy and K8s by businesses will continue to grow rapidly, with an ecosystem backed by leading internet tech companies and a growing developer community.

When working with K8s, developers need to keep in mind security, architecture, and DevOps methodology. Developers and DevOps engineers need to consider how best to secure their K8s environments. Developers should become familiar with the Cloud Native Computing Foundation (CNCF) stack with K8s as the centerpiece, along with technologies like service meshes and runtime security.

Figure out the best architecture to build around. Architect so your application can run on a different platform. Understand the need to be elastic on a cloud-native platform. Be explicit about the shape of your infrastructure. Be explicit with your manifests. Get to KubeCon. If you experience a problem, reach out to the community.
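"Be explicit with your manifests" can be made concrete like this (image, paths, and numbers are illustrative assumptions): stating resource requests, limits, and a readiness probe tells the scheduler and the load balancer exactly what shape your workload has, instead of leaving both to guess:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.4   # hypothetical image; pin a tag, never :latest
          resources:
            requests:              # what the scheduler reserves on a node
              cpu: 250m
              memory: 256Mi
            limits:                # hard ceiling enforced at runtime
              cpu: "1"
              memory: 512Mi
          readinessProbe:          # traffic only flows once this succeeds
            httpGet:
              path: /healthz       # assumed health endpoint
              port: 8080
```

Explicit requests and limits also make the manifest portable across platforms, which supports the earlier advice to architect so your application can run elsewhere.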

Leave large-scale operations to ops teams or an external company. Focus on building cloud-native applications using cloud-native principles. Try to get DevOps automation done right. Have a good CI/CD process. Make the process as repeatable as possible. Use agile techniques. Do incremental development for fast feedback. Use best-of-breed tools.

Here’s who shared their insights:


Further Reading

TechTalks With Tom Smith: What and Why of DataOps

TechTalks With Tom Smith: Tools and Techniques for Scaling DevOps

TechTalks With Tom Smith: How Machine Learning Has Changed Software Development

Topics:
microservices, kubernetes, security, planning, knowledge, data locality, enterprise, k8s

Opinions expressed by DZone contributors are their own.
