
GitOps Workflows for Istio Canary Deployments


Read how GitOps workflows can be used to roll out and manage non-atomic canary releases on an Istio service mesh.


Follow this tutorial to learn how to control and manage a canary deployment on Istio using GitOps workflows.

Canary deployments or releases are used when you want to test new functionality with a subset of users. Traditionally, you might have run two nearly identical servers: one serving all users, and another with the new features rolled out to only a subset of them.

But by using GitOps workflows, your canary can be fully controlled through Git. If something goes wrong and you need to roll back, you can redeploy a stable version entirely from Git. An Istio virtual service lets you manage how much traffic goes to each deployment. With both a GA and a canary deployed, you can keep iterating on the canary release until it meets expectations and you are able to open it up to 100% of the traffic.

Istio Canary Deployment Overview

In this scenario, you will have two different manifests checked into Git: a GA tagged 0.1.0 and a canary tagged 0.2.0. You will then use Git and Weave Cloud to automate the deployment of patches for these releases. By altering the weights in the Istio virtual service manifest, you control the percentage of traffic each deployment receives.
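
For orientation, the canary manifest might look roughly like the sketch below. This is illustrative only; the exact labels, container name, image path, and replica count live in the tutorial repo and may differ.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo-canary
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: podinfo
      version: canary     # distinguishing label the Istio subset can match on (assumed)
  template:
    metadata:
      labels:
        app: podinfo
        version: canary
    spec:
      containers:
      - name: podinfod                                  # container name assumed
        image: quay.io/stefanprodan/podinfo:0.2.0       # canary image tag
        ports:
        - containerPort: 9898

The GA deployment would look the same apart from its labels and the 0.1.0 image tag.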

Finally, you will use Weave Cloud to automatically detect and deploy new patches for the GA and the canary to your cluster. You will then monitor the canary release's performance through the request latency graph in Weave Cloud.


The GitOps workflow for continuous deployment to Istio:

  • An engineer fixes the latency issue and cuts a new release by tagging the master branch as 0.2.1
  • GitHub notifies GCP Container Builder that a new tag has been committed
  • GCP Container Builder builds the Docker image, tags it as 0.2.1 and pushes it to Quay.io (this can be any container registry)
  • Weave Cloud detects the new tag and updates the Canary deployment definition
  • Weave Cloud commits the Canary deployment definition to GitHub in the cluster repo
  • Weave Cloud triggers a rolling update of the Canary deployment
  • Weave Cloud sends a Slack notification that the 0.2.1 patch has been released

Once the Canary is fixed, keep increasing the traffic to it, shifting traffic away from the GA deployment, by modifying the weight settings and committing those changes to Git. With each Git push and manifest modification, Weave Cloud detects that the cluster state is out of sync with the desired state and automatically applies the changes.

If you notice that the Canary doesn't behave well under load, revert the changes in Git. Weave Cloud rolls back the weight setting by applying the desired state from Git on the cluster.
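
As a rough sketch of that rollback, assuming the weight change was the most recent commit on master in your cluster config repo:

# Undo the weight change and push; Weave Cloud re-applies the previous state
git revert HEAD --no-edit
git push origin master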

You can keep iterating on the canary code until the SLA is on a par with the GA release.

Prerequisites

To run through this tutorial for Istio canary deployments, you will need the following:

  • A Kubernetes cluster on GKE (or on another cloud provider).
  • To `git clone` the cluster config Git repo that contains the desired state of your cluster.
  • A Weave Cloud account (it’s free for the first 30 days).

#1. Set up a Kubernetes cluster with your choice of cloud provider.

In this tutorial, we use Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP), which makes it easy to spin up a Kubernetes cluster in minutes. You can use the free tier.
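
If you prefer the command line, a cluster along these lines works; the cluster name, zone, and machine type below are illustrative, so adjust them to your project and quota:

gcloud container clusters create istio-canary \
    --zone=us-central1-a \
    --num-nodes=3 \
    --machine-type=n1-standard-2

# Point kubectl at the new cluster
gcloud container clusters get-credentials istio-canary --zone=us-central1-a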


#2. Clone the tutorial repo:

git clone https://github.com/stefanprodan/gitops-istio


#3. Install Istio to the new cluster:

Download the latest release:

curl -L https://git.io/getLatestIstio | sh -


Add the istioctl client to your PATH:

cd istio-0.7.1
export PATH=$PWD/bin:$PATH
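
To confirm the client is picked up from your PATH, you can run this optional check (not part of the original steps):

istioctl version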


Install Istio services without enabling mutual TLS authentication:

kubectl apply -f install/kubernetes/istio.yaml
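
Before moving on, you can optionally verify that the control plane comes up:

# All istio-system pods should eventually reach Running
kubectl -n istio-system get pods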

#4. Set up Istio automatic sidecar injection

Generate certs:

./install/kubernetes/webhook-create-signed-cert.sh \
    --service istio-sidecar-injector \
    --namespace istio-system \
    --secret sidecar-injector-certs
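
The script creates a secret holding the injector's certificates; you can confirm it exists with this optional check:

kubectl -n istio-system get secret sidecar-injector-certs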


Install the sidecar injection configmap:

kubectl apply -f install/kubernetes/istio-sidecar-injector-configmap-release.yaml


Set the caBundle in the webhook install YAML that the Kubernetes api-server uses to invoke the webhook:

cat install/kubernetes/istio-sidecar-injector.yaml | \
    ./install/kubernetes/webhook-patch-ca-bundle.sh > \
    install/kubernetes/istio-sidecar-injector-with-ca-bundle.yaml
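
The patched manifest then needs to be applied so that the injector and its webhook configuration are actually installed. This mirrors the Istio 0.7 sidecar-injection guide and is assumed here rather than shown in the original:

kubectl apply -f install/kubernetes/istio-sidecar-injector-with-ca-bundle.yaml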


#5. Install the Weave Cloud Agents to your cluster and connect your Git repo to Weave Cloud.

Sign up for Weave Cloud and install the agents:

  1. Select ‘Connect a Cluster’ from Weave Cloud.
  2. From Weave Cloud select Platform → Kubernetes,  Environment → Google Container Engine and then copy the command shown to you in Weave Cloud.

  3. Open the GKE terminal and paste in the command. Weave Cloud indicates that the Explore and Monitor agents are now connected. Click on Explore to check out your cluster and Istio.

Now you need to complete the Deploy setup and connect your Git repo to Weave Cloud:

  1. Click the cog icon from the toolbar and then Deploy from the menu that appears. 



  2. Follow the instructions to connect your repo and complete the Weave Cloud setup (you'll need to copy and paste another command into your GKE terminal).

Once everything is set up correctly, click Deploy to see your repo:


#6. Explore the deployments

Once you've connected your repo, any manifests that Weave Cloud finds are deployed automatically to the cluster. Go to the Explore section and select the 'test' namespace to see all of the services in the cluster.

The namespace definition is included in the manifests; when Weave Cloud deploys them, the test namespace is created in the cluster.

It is the equivalent of running the following on the cluster:

kubectl create namespace test


Label the test namespace with istio-injection=enabled:

kubectl label namespace test istio-injection=enabled
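
In the GitOps model the same thing is expressed declaratively. A namespace manifest along these lines would produce the same result (the exact file in the cluster repo may differ):

apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled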


The podinfo GA and canary deployments were also deployed automatically by Weave Cloud, the equivalent of running:

kubectl -n test apply -f \
    ./cluster/podinfo/ga-deployment.yaml,./cluster/podinfo/canary-deployment.yaml,./cluster/podinfo/service.yaml


And the Istio destination rule, the virtual service, and its gateway were applied with:

kubectl -n test apply -f ./cluster/podinfo/destination-rule.yaml
kubectl -n test apply -f ./cluster/podinfo/virtual-service.yaml
kubectl -n test apply -f ./cluster/podinfo/gateway.yaml
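
For context, the destination rule is what defines the ga and canary subsets referenced by the virtual service. A minimal sketch, using the same pre-0.8 v1alpha3 field names as the manifests above (later Istio versions use host instead of name, and the version labels here are assumptions about the repo's manifests):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: podinfo
  namespace: test
spec:
  name: podinfo.test
  subsets:
  - name: ga
    labels:
      version: ga        # must match the GA deployment's pod labels
  - name: canary
    labels:
      version: canary    # must match the canary deployment's pod labels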


#7. Run the load test

To generate some traffic for the canary test, run the following in the GKE terminal:

kubectl -n test exec -it loadtest -- sh

# Start the load test from inside the container:
hey -n 1000000 -c 2 -q 5 http://podinfo.test:9898/version


#8. Run the GA and canary deployments

To begin with, all of the traffic is routed to the GA deployment:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: podinfo
  namespace: test
spec:
  hosts:
  - podinfo
  - podinfo.co.uk
  gateways:
  - mesh
  - podinfo-gateway
  http:
  - route:
    - destination:
        name: podinfo.test
        subset: canary
      weight: 0
    - destination:
        name: podinfo.test
        subset: ga
      weight: 100



Canary warm-up

Edit the canary manifest (podinfo-canary.yaml) to route 10% of the traffic to the canary and commit the change to Git:

 http:
 - route:
   - destination:
       name: podinfo.test
       subset: canary
     weight: 10
   - destination:
       name: podinfo.test
       subset: ga
     weight: 90


Once the image has been built, deploy it through the Weave Cloud GUI by filtering on tags and selecting the podinfo-canary image (if you don't want to do this manually, set the workload to auto-deploy):
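
If you opt for auto-deploy, Weave Cloud's deploy agent (Flux) is typically configured through annotations on the workload. A minimal sketch; the container name podinfod and the tag filter are assumptions, not taken from the tutorial repo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo-canary
  namespace: test
  annotations:
    flux.weave.works/automated: "true"           # let Weave Cloud release new images automatically
    flux.weave.works/tag.podinfod: semver:~0.2   # restrict the canary to 0.2.x tags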


Click on one of your workloads to see the dry run screen:


Select 'View workload metrics' to view the workload dashboard:


Monitor the traffic from Weave Cloud's built-in instant dashboards. Select the 'test' namespace from the dropdown.


Canary promotion

Increase the canary traffic to 60% and again commit the change:

 http:
 - route:
   - destination:
       name: podinfo.test
       subset: canary
     weight: 60
   - destination:
       name: podinfo.test
       subset: ga
     weight: 40


Add the following PromQL query to view the total request rate for the canary vs. the GA:

sum(rate(http_requests_total{_weave_service=~"podinfo-.*"}[1m])) by (_weave_service)





For full promotion, route 100% of the traffic to the canary:

 http:
 - route:
   - destination:
       name: podinfo.test
       subset: canary
     weight: 100
   - destination:
       name: podinfo.test
       subset: ga
     weight: 0


Measure request latency for each deployment in Weave Cloud:

Add the following PromQL query to compare the 99th percentile latency of the two deployments:

histogram_quantile(0.99,
sum(rate(http_requests_bucket{_weave_service=~"podinfo-.*"}[10m])) by (le, _weave_service))


Observe the traffic shift in Weave Cloud's Explore view:


Final Thoughts

This tutorial demonstrated how to roll out and iterate on a canary deployment running on Istio using GitOps workflows and Weave Cloud.

To read more about GitOps, see our four-part series on the topic.

