Canary Releasing on Kubernetes Using Vamp and ACS
Set up your own canary releases using Kubernetes, a well-integrated infrastructure using the open source Vamp, and Azure Container Service.
In this article, you'll learn how to run Vamp on Kubernetes and Azure ACS and start canary releasing in minutes.
Releasing containerized application workloads on Kubernetes is almost too easy, and Kubernetes comes with some powerful release patterns out of the box. There are already some great resources out there describing interesting blue/green and canary release deployment scenarios with Kubernetes — the official docs have a section that proposes a canary release strategy, and this write-up also discusses a very similar scenario.
In essence, the proposed strategy in most cases is to spin up as many replicas as you need to reflect the user distribution you want. For example, if you want one third of your users to hit the "canary" version of your app and two thirds to hit the "stable" version, you spin up two stable replicas and one canary replica. You then map these replicas to the same (ingress) service and you're done.
This completely works and is totally valid in small-scale, stateless, and very general situations. It has its weaknesses, however:
- The traffic distribution you can achieve is tied directly to how many replicas you run in your cluster. Need to run 50 replicas just to serve stable traffic? Then you need another 25 replicas of the canary version to give the canary one third of the traffic.
- You can only distribute traffic based on a percentage of the total traffic.
- There is no stickiness: users can flip-flop between the stable and canary versions on each request. This might break your app.
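To make the first weakness concrete, here is the replica arithmetic as a quick shell sketch; the numbers match the 50-replica example above:

```shell
# Under the naive approach, the canary's share of traffic equals its share
# of replicas: canary / (canary + stable) = share.
# Solving for the canary count: canary = stable * share / (1 - share).
stable=50
share_num=1
share_den=3            # target: 1/3 of all traffic hits the canary
canary=$(( stable * share_num / (share_den - share_num) ))
echo "stable=$stable canary=$canary"   # prints: stable=50 canary=25
```

Every change to the desired split means rescaling real workloads, which is exactly the coupling between routing and resources that Vamp removes.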
So, if you want to take it up a notch and gain more flexibility in programmatic routing and workflow, you'll probably want to check out Vamp. Luckily, getting up and running with Vamp and Kubernetes is incredibly easy and quick, especially when running on Azure Container Service, as this enables some neat out-of-the-box load balancer and endpoint integration.
In this post, we'll walk you through all the initial steps to get up and running.
- Set up a Kubernetes cluster on ACS.
- Install Vamp and the CLI tools.
- Run a first canary release using a combination of both tools.
Full disclosure: Our dev team is still finalising the full Kubernetes integration. 95% of Vamp's features work extremely well with Kubernetes, but we have some open bugs that we still need to squash.
What Is Vamp?
Vamp is an open source, self-hosted platform for managing (micro)service-oriented architectures that rely on container technology. Vamp provides a DSL to describe services, their dependencies, and required runtime environments in blueprints.
Vamp takes care of route updates, metrics collection, and service discovery so you can easily orchestrate complex deployment patterns, such as A/B testing and canary releases.
Kubernetes and Vamp Installation
If you have your Azure account set up and credentials in place, use the following script to bootstrap a Kubernetes cluster and install Vamp. The steps in this script are described below.
```shell
# Setup Azure Container Service with Kubernetes
az group create --name myVampResourceGroup --location westeurope
az acs create --orchestrator-type kubernetes --resource-group myVampResourceGroup --name myVampK8SCluster --generate-ssh-keys
az acs kubernetes install-cli
az acs kubernetes get-credentials --resource-group myVampResourceGroup --name myVampK8SCluster

# Install Vamp
curl -s https://raw.githubusercontent.com/magneticio/vamp.io/master/static/res/v0.9.5/vamp_kube_quickstart.sh | bash
kubectl proxy
```
1. Set Up Azure Container Service With Kubernetes
Microsoft has done an excellent job in providing a very easy and quick Kubernetes setup with Azure Container Service. It takes just a handful of commands to get going. You need an active Azure subscription and the Azure command line interface installed to run the commands.
2. Install Vamp
Integrating Vamp into Kubernetes is made delightfully simple using our install script. It talks directly to `kubectl` and sets up Vamp and its dependencies. Read the full source of the install script here.
The script finishes by running `kubectl proxy`, which opens a local proxy on port 8001 connecting you to your Kubernetes cluster on ACS.
A Quick Overview of Vamp on Kubernetes
Open a browser, navigate to http://localhost:8001/ui/, and go to the Workloads tab. You will see all the Vamp components installed and running.
- Daemon Sets: To facilitate smart routing logic, Vamp relies on having its routing component, the Vamp Gateway Agent (VGA), on every node.
- Deployments: Vamp's components are described in Kubernetes deployments, which in turn describe the replica sets and pods. We can see one instance of the Vamp application, four instances of the Vamp Workflow Agent (used for background jobs), and an Elasticsearch and Kibana installation, used for collecting metrics.
- Pods: The pods show the actual runtime state of the described deployments. If all things are good, they should all be in the "Running" state.
- Replica Sets: Replica Sets describe the scaling behavior of a specific set of Pods. In our case, there is no scaling and they should match the Pods.
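The same inventory is visible from the command line. A quick sketch, assuming `kubectl` is still pointed at the ACS cluster from the setup step:

```shell
# List the Vamp components by resource type, mirroring the Workloads tab
kubectl get daemonsets,deployments,pods,replicasets
```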
```shell
kubectl get services vamp
NAME   TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)
vamp   LoadBalancer   10.0.215.28   22.214.171.124   8080:32132/TCP
```
Deploy and Do a Canary Release
We're going to perform a simple canary release using the Vamp CLI. First, install it and set the `VAMP_HOST` environment variable to Vamp's address.
```shell
npm install -g vamp-cli
export VAMP_HOST=http://126.96.36.199:8080
```
Use the following script to insert two Vamp blueprints:
```shell
# Create two blueprints using the Vamp CLI
curl -s https://raw.githubusercontent.com/magneticio/simpleservice/master/blueprints/service_100.yml | vamp create blueprint --stdin
curl -s https://raw.githubusercontent.com/magneticio/simpleservice/master/blueprints/service_110.yml | vamp create blueprint --stdin
```
Then deploy version 1.0.0 of our simple service...
```shell
vamp deploy simpleservice:1.0.0 simple_dep
```
…and check if our deployment is done.
```shell
vamp list deployments
NAME         CLUSTERS        PORTS                     STATUS
simple_dep   simpleservice   simpleservice.web:40001   Deployed
```
Vamp integrates directly with the Kubernetes LoadBalancer service type by setting up a service and external endpoint. We provide the selector `io.vamp.gateway=simple_dep_9050` to filter for the right data, where `simple_dep` is the name we gave to our deployment and `9050` is the gateway port we defined in the blueprint for simpleservice version 1.0.0. If the external IP is still `<pending>`, please be patient while the environment bootstraps the necessary infrastructure. Luckily, this only needs to happen once, as the gateway on port 9050 is our stable endpoint.
```shell
kubectl get services --selector=io.vamp.gateway=simple_dep_9050
NAME   TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)
51..   LoadBalancer   10.0.44.213   188.8.131.52   9050:31048/TCP
```
But it doesn't end there. The Kubernetes LoadBalancer service is in turn integrated into Azure's load balancer, creating a load balancing rule and health probe and exposing our service to the internet in a reliable fashion.
Having said all that, our service is now reachable on 184.108.40.206:9050, as reported by `kubectl`. Open a browser and you should get a response from version 1.0.0 of the service.
As you might have noticed, next to the LoadBalancer service, this deployment also shows up as a Kubernetes Deployment, Pod, and Replica Set. This demonstrates how Vamp uses the native scheduling and resource management of Kubernetes.
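The same check can be scripted from the command line. A minimal sketch, where the IP is whatever `kubectl` reported as the external IP for the gateway service:

```shell
# Every request should be answered by version 1.0.0 at this point,
# since no other version has been merged into the deployment yet.
EXTERNAL_IP=184.108.40.206   # substitute the EXTERNAL-IP from kubectl
for i in 1 2 3; do
  curl -s "http://$EXTERNAL_IP:9050/"
done
```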
The deployment of 1.0.0 is done. We now merge version 1.1.0 into our existing deployment; the merged version starts out receiving 0% of the traffic.
```shell
vamp merge simpleservice:1.1.0 simple_dep
```
We can now have a look at the internal gateway that Vamp has set up, and that allows us to migrate traffic to our new version. This internal gateway is completely separate from the external one described above, neatly separating the stable ingress endpoints and the internal dynamic routing.
```shell
vamp describe gateway simple_dep/simpleservice/web
Name:         simple_dep/simpleservice/web
Type:         internal
Port:         40001/http
Service host: 10.0.13.125
Service port: 31071/http
Sticky:       false

ROUTE                 WEIGHT   CONDITION   STRENGTH   TARGETS
simple.../1.1.0/web   0%       -           0%         10.244.0.18:3000
simple.../1.0.0/web   100%     -           0%         10.244.2.14:3000
```
Now let's update the routing and assign 70% to version 1.0.0 and 30% to version 1.1.0.
```shell
vamp update-gateway simple_dep/simpleservice/web \
  --weights simple_dep/simpleservice/simpleservice:1.0.0/web@70%,simple_dep/simpleservice/simpleservice:1.1.0/web@30%
```
Now, hitting our endpoint a couple of times should show responses from both versions, roughly in a 70/30 ratio.
You can, of course, take much smaller steps than 70/30, as long as the numbers add up to 100. Also, there is no hard limit on the number of services you can split traffic between.
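To get a feel for what a weighted split looks like over real traffic, here is a quick local simulation in plain awk, no cluster needed; the request count of 1000 is arbitrary:

```shell
# Simulate 1000 requests routed with a 70/30 weight, as configured above.
split=$(awk 'BEGIN {
  srand(1)                       # fixed seed for repeatability
  for (i = 0; i < 1000; i++) {
    if (rand() < 0.7) stable++; else canary++
  }
  printf "stable=%d canary=%d", stable, canary
}')
echo "$split"
```

The counts land close to 700/300 but rarely exactly there — weighted routing is probabilistic per request, which is also why the stickiness caveat from earlier matters for session-bound apps.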
Wrap Up and Next Steps
Setting weights on gateways is just one way of doing canary releases. Using Vamp's conditions, you can influence traffic based on HTTP headers like cookies, user agents, etc. As this is not Kubernetes specific, we won't dive into that in this write-up, but here are some links for further reading:
- Setting conditions on gateways using the Vamp CLI →
- Using Vamp condition short codes →
- Using Vamp in a continuous integration pipeline. This blog post was written for DC/OS, but Vamp is platform agnostic, so it should work fine on Kubernetes too.
Published at DZone with permission of Tim Nolet. See the original article here.