Overcome Challenges of Continuous Delivery for Kubernetes With Spinnaker
Kubernetes has a lot of potential to help you deploy and deliver your code changes quickly, but there are challenges in doing continuous delivery for K8s.
Kubernetes is the leading container orchestration system, and it has a vast ecosystem of open-source and commercial components built around it. Today, in the wake of the pandemic, even more enterprises are considering Kubernetes a central part of their IT transformation journey. Kubernetes is a great container management tool because it offers:
- Automated bin packing
- Scaling and self-healing containers
- Service discovery
- Load balancing
However, using Kubernetes alone may not make an organization agile, because it was never meant to be a deployment system. This blog will highlight some of the challenges of using Kubernetes and, based on our work with hundreds of organizations, how to avoid them and unlock the full potential of cloud-native.
Challenges in Deployments While Adopting Kubernetes
Deployment Complexities and Use of Scripts
Deploying an application into Kubernetes is not straightforward, as it involves a lot of manual scripting. For instance, an engineer has to create a Kubernetes Deployment manifest in YAML or JSON format and write kubectl commands to deploy the application.
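An illustrative Deployment manifest of this kind might look like the following (the application name, labels, image, and port are placeholders):

```yaml
# deployment.yaml -- a minimal Kubernetes Deployment manifest
# (app name, labels, and image below are illustrative placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myrepo/myapp:1.0.0
          ports:
            - containerPort: 8080
```

The engineer then applies it with `kubectl apply -f deployment.yaml` and typically verifies the rollout with `kubectl rollout status deployment/myapp`, repeating variations of these steps for every environment.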
While this may look easy to experts for a single deployment, it becomes an arduous task when the aim is to perform multiple deployments into dev, QA, and production per day. It also requires substantial familiarity with Kubernetes, which not all members of the team will have. More often than not, organizations end up with ad hoc scripts and kubectl commands that slow deployment.
A recent CNCF survey of 1,500 respondents reports that complexity and the cultural change involved in using and deploying containers remain top challenges for Kubernetes adoption.
Overdependence on Experts and Developer Burnout
A lack of knowledge and expertise in Kubernetes makes developers and application teams heavily dependent on the DevOps team (also referred to as the release team) to continuously help them create K8s objects such as Deployments, ReplicaSets, StatefulSets, and DaemonSets. Following up and collaborating with various groups to get changes deployed takes a lot of time from all stakeholders. Furthermore, with shorter deadlines and immense pressure to meet business objectives, teams have to spend long hours deploying their changes.
As per a report by D2IQ, almost all organizations (96%) face challenges and complexities during initial deployments of containerized applications (called Day 2 operations in DevOps language) and cite Kubernetes as a source of pain. The report also shares: “51% of developers and architects say building cloud-native applications makes them want to find a new job.”
This is extremely stressful for senior IT leaders who are responsible for deploying containerized applications for their organizations.
Tough Security Challenges
Kubernetes is not meant to enforce policies, for example, finding vulnerabilities in container images. So if you use Kubernetes for deployments, you need to handle this another way, typically through manual policy enforcement or some scripting.
For example, under the default network policy, Kubernetes pods can communicate with each other and with external endpoints so that applications operate seamlessly. But if one container or pod is breached through an application or infrastructure security issue, all the others can be attacked in turn (a widened attack surface, sometimes described as complex attack vectors).
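One common mitigation, which again must be authored and applied by hand, is a default-deny NetworkPolicy that blocks all traffic unless a more specific policy allows it. A sketch, with a hypothetical namespace name:

```yaml
# default-deny.yaml -- denies all ingress and egress for every pod in
# the namespace unless a more specific NetworkPolicy allows the traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: myapp-namespace   # placeholder namespace
spec:
  podSelector: {}              # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Note that NetworkPolicy objects have no effect unless the cluster runs a network plugin that enforces them (such as Calico or Cilium), which is one more operational detail teams must track themselves.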
With organizations prioritizing speed in software delivery, security and compliance are sometimes relegated to an afterthought. Instead, while adopting Kubernetes, organizations must try to integrate security and compliance into their build, test, deploy, and production stages.
Lack of Deployment Strategies and Post-Deployment Health Checks
One common purpose of using Kubernetes-based applications is to scale to large user bases on demand. In production environments, you can observe many nodes, hundreds of pods, and thousands of containers running multiple application instances.
One way to introduce a new change to a live audience is through gradual deployment, i.e., strategies such as blue/green or canary. This avoids the risk of releasing an unstable version to end customers.
However, native support for blue/green and canary deployments is absent in Kubernetes. On top of that, because of the distributed nature of containerized applications, it is cumbersome and complex to fetch and share a newly deployed Kubernetes app’s health status and estimate its vulnerabilities and risk to the organization.
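To see why this matters, consider what a hand-rolled blue/green cutover looks like with plain kubectl (the deployment and service names below are hypothetical):

```shell
# Deploy the new ("green") version alongside the live ("blue") one
kubectl apply -f deployment-green.yaml

# Wait until every green pod is ready
kubectl rollout status deployment/myapp-green

# Repoint the Service selector from blue to green -- the cutover moment
kubectl patch service myapp \
  -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'

# If something goes wrong, the "rollback" is another manual patch
kubectl patch service myapp \
  -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'
```

Every step must be scripted, sequenced, and monitored by hand, and a failure between steps leaves traffic in a mixed or broken state, which is exactly the gap a delivery platform is meant to close.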
Kubernetes Deployment With Open Source Spinnaker
Spinnaker is an open-source, multi-cloud continuous delivery platform for releasing code fast and staying ahead of the competition. Spinnaker treats Kubernetes applications as first-class citizens and helps IT teams deploy applications rapidly into any Kubernetes distribution (K8s, GKE, EKS, AKS). The following are some important Spinnaker capabilities.
Spinnaker Pipeline for End-to-End Deploy Automation
Spinnaker offers end-to-end pipelines for automated deployments. The workflow provides constructs for consistent, repeatable deployments: it can bake Amazon Machine Images (AMIs) or Docker images on demand, find K8s containers in clusters and deploy them, modify cluster groups, and run containers in K8s.
After deployment, Spinnaker checks and displays the health of Kubernetes clusters in real time. Deployment stages in the pipeline can be configured to notify stakeholders by email, mobile message, or Slack at every level.
Inbuilt Deployment Strategies
Open-source Spinnaker offers various deployment strategies, such as blue/green (red/black), rolling red/black, Highlander, and canary, to reduce the risk of deploying into production. Spinnaker also interacts with K8s pod autoscalers to ensure that capacity is maintained during deployment.
Automated Canary Analysis (ACA), a technique offered by open-source Spinnaker, can be leveraged to minimize the risk of deploying a new update to a K8s production environment by comparing metrics and logs emitted by the old version with those produced by a small deployment of the newer version.
Besides that, Spinnaker also provides versioning of ConfigMaps and Secrets along with immutable server deployments. This allows a rollback to restore the exact configuration used previously, along with the binaries that ran with it.
Continuous Verification of New Artifacts
Numerous enterprise plugins are available around open-source Spinnaker that extend the use cases of the free tool. For example, OpsMx, a distribution partner of open-source Spinnaker, offers an enterprise Spinnaker that uses AI/ML on logs and metrics to detect issues in newly deployed Kubernetes applications. If an anomaly is detected, Spinnaker can roll back the new application. Before doing so, it ensures that the previous server group is adequately sized and receiving 100% of the traffic before disabling and deleting the recently created server group.
Enforcing Security and Compliance in Kubernetes Deployments
Ensuring security in your Kubernetes usage requires infusing security and compliance gates throughout the software development lifecycle, from coding through build to deployment. For example, you can use a security gate in the build stage to fail a deployment if the build fails a smoke test. Similarly, security gates can be installed to check whether a container image has passed an image scan. Open-source Spinnaker allows security managers to install such gates in the delivery pipeline.
Spinnaker can also be extended to declare policies that enforce organizational guidelines and industry standards such as PCI-DSS, HIPAA, and SOC 2. For example, release managers can define allowed deployment dates and times as part of a blackout-window policy: certain peak-traffic periods should be protected from deployments that carry downtime risk. Deployment windows allow pipelines to ensure deployments into K8s happen outside these peak-traffic periods and do not impact the customer experience.
GitOps Style Deployments
Organizations can perform GitOps-style deployments into Kubernetes clusters with the help of Spinnaker. Teams familiar with YAML can make changes to the manifest files in a Git repository, and Spinnaker can be configured to detect those changes and deploy them autonomously into chosen environments.
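A sketch of that flow, assuming a hypothetical manifest repository and a Spinnaker pipeline configured with a Git trigger:

```shell
# 1. Edit the manifest under version control (repo URL is hypothetical)
git clone https://github.com/example/k8s-manifests.git
cd k8s-manifests
#    ...bump the image tag or replica count in deployment.yaml...

# 2. Commit and push the change
git add deployment.yaml
git commit -m "Bump myapp image to v1.2.0"
git push origin main

# 3. The Spinnaker pipeline's Git trigger detects the commit and
#    deploys the updated manifest to the chosen environment
```

The manifest in Git becomes the source of truth: nobody runs kubectl by hand, and every deployed change is traceable to a commit.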
The path to achieving business agility is undeniably through adopting cloud-native applications, and Kubernetes plays a central role. Organizations that combine Kubernetes and Spinnaker are more likely to see positive results faster than those working with Kubernetes alone.
Published at DZone with permission of Debasree Panda. See the original article here.
Opinions expressed by DZone contributors are their own.