Conciliate the Tangled Mesh Using ISTIO
ISTIO, an open-source service mesh platform, makes service-to-service communication secure, efficient, and reliable. Check out the architecture and components.
We are in the era of innovation and the post-digital revolution, where the IT world keeps producing advanced technologies, tools, and ways of designing applications to meet rising business demands and tame the complexity of the IT ecosystem, resulting in a massive technology drift. Every day, we face new showstoppers when it comes to running our IT systems efficiently and in a controlled fashion. Fortunately, a whole range of tools and technologies is available to address them, as some organizations work dedicatedly to make our lives easier.
Similarly, the traditional way of developing applications, the waterfall model combined with monolithic architecture, was difficult to sustain once our systems became dynamic in nature. Clients did not want to wait months to see their application in production, which gave birth to the agile development model. Monolithic architecture failed to prove its worth when it came to pushing new changes, since every release required validating the whole application's functionality. As these problems surfaced, multiple architecture patterns evolved, and microservice architecture is one of them. To stay in this race and fulfill ever-changing, ever-increasing user demands, organizations must focus on:
- Streamlining and improving business process efficiency
- Intelligent IT and information for the journey to new technologies
- Balancing Speed, Cost, Quality, and Risk
- Increasing Capacity to Innovate
New patterns and ways of designing applications focus on making the end-user experience more wonderful, which in turn introduces complexity and creates new challenges, such as:
- Maintaining Business Continuity
- Unpredictable Web Traffic
- Security Threats
- Operational Complexities
- Lack of Integration, Control, and Visibility
Microservice-based application architecture is one design pattern that was introduced to tackle several of the challenges above. It allows developers to split an application into multiple independent parts, but in turn increases the operational strain. Though it addresses some business issues, it creates new challenges around interaction and control between microservices; this tangled web of service-to-service communication is what we call a service mesh.
We needed a solution to control how different parts of the application share data with one another in a controllable manner, because plain service-to-service communication suffers from:
- No encryption
- No retries, no failover
- No intelligent load balancing
- No routing decisions
- No metrics or traces
- No default access control
To solve all these issues, ISTIO was developed.
What Is ISTIO?
ISTIO is an open-source service mesh platform. It is a dedicated layer that can be introduced to make service-to-service communication secure, efficient, and reliable.
It includes APIs that let it integrate into any logging platform or policy system. It is designed to run in a variety of environments, such as on-premise, cloud-hosted, in Kubernetes containers, and more. It provides behavioral insights and control over service mesh for microservices. Let us look at its architecture and components.
The Istio service mesh is logically split into a data plane and a control plane.
- Data Plane: a set of Envoy proxies deployed as sidecars. It handles the communication between the services and takes care of functionalities like service discovery, load balancing, traffic management, health checks, etc.
- Control Plane: manages and configures the proxies, allowing for central management of the service mesh. Two of its key components are Pilot, which provides all the configuration data from a centralized place, and Citadel, which provides TLS certificates for all the Envoy proxies.
Why Do I Need ISTIO?
It takes too much effort to address service mesh challenges at the application source code level. One possible solution is to add a proxy alongside every microservice, which is called a sidecar deployment, and ISTIO does this magic for us. Now, instead of communicating with a microservice directly, we communicate via its sidecar, which solves the service mesh challenges. ISTIO gives us:
- Automatic load balancing for HTTP, WebSocket, and TCP traffic.
- Full control of traffic behavior with advanced routing rules, retries, failovers, and fault injection.
- A pluggable policy layer and configuration API supporting access controls, rate limits, and quotas.
- Automatic metrics and logs for traffic.
- Secure service-to-service authentication with strong identity management between services in a cluster.
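As a quick taste of that traffic-control capability, here is a minimal, hypothetical VirtualService; the service name "myservice" and all timing values are illustrative assumptions, not part of this article's demo:

```yaml
# Illustrative VirtualService -- "myservice" and the values below are
# hypothetical, shown only to demonstrate fault injection and retries.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice
  http:
  - fault:
      delay:
        percentage:
          value: 10          # inject a delay into 10% of requests
        fixedDelay: 5s
    route:
    - destination:
        host: myservice
    retries:
      attempts: 3            # retry failed requests up to 3 times
      perTryTimeout: 2s
```

A few declarative lines like these replace retry and fault-injection logic that would otherwise live in every service's code.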
Now that we have discussed ISTIO and the challenges it aims to solve, let us see it in action. The focus is on enabling Istio for a Kubernetes-based application, advanced routing, canary deployments, and a Grafana dashboard setup to see service metrics.
In this article, we will see how we can deploy ISTIO into microservices running on the Kubernetes platform. For demo purposes, GKE (Google Kubernetes Engine) has been used to host the Kubernetes cluster.
There are certain pre-requisites that we need to ensure before we start our demo.
Pre-Requisites For Istio Setup
- Create a Kubernetes cluster in GCP and enable Istio (beta). Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster.
- Grant cluster administrator (admin) permissions to the current user. To create the necessary RBAC rules, the current user requires admin permissions.
Get Account Name: gcloud config get-value core/account
Create Admin Role: kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=firstname.lastname@example.org
Now, if all the pre-requisites are complete, it is time to validate the same.
Validating Istio Setup
- Getting the Kubernetes Cluster list and setting the GKE context.
gcloud container clusters list
gcloud container clusters get-credentials k8-istio (k8-istio is the cluster name in my case)
Once executed, these commands list the clusters running in GCP and set the kubectl context to our cluster.
- Validating Running Istio services.
kubectl get service -n istio-system
- Checking for pods running specific to Istio.
kubectl get pods -n istio-system
- Validating CRDs.
kubectl get crds
Once validation is complete, let us configure ISTIO.
To enable Istio for our applications running in Kubernetes, we first need to create a namespace and enable Istio injection on it.
Create the namespace "istio" for this purpose:
kubectl create namespace istio
Enable Istio sidecar injection on the namespace:
kubectl label namespace istio istio-injection=enabled
Here, I will deploy a sample Python application, which I created for this purpose. Application 1 takes the environment variable "VAR1" and displays it in the browser. To show how microservices interact with each other in a controlled fashion, Application 1 will call Application 2's URL to get the value of its environment variable and display the respective content in the browser.
GitHub Link: https://github.com/mohitgargdevops/Istio-Demo
I have folders app1, app1-V2, and app1-V3, which are versions 1.0, 2.0, and 3.0 of the "app1" microservice. Folder "app2" has the source code for the second microservice.
We need multiple versions of “app1” to showcase Istio advanced routing, Canary deployments, and other capabilities.
Now, first, we need to create Docker images for all these application versions and push the images to Google Container Registry (GCR). The respective Dockerfiles and requirements files with the required dependencies are also committed along with the source code.
App1 Docker Image Creation, Tagging, and Pushing:
We need to execute the commands below for "app1" to build the image and push it to GCR (k8-istio-266003 is the GCR project in my case).
docker build -t istioapp-1 .
docker tag istioapp-1 gcr.io/k8-istio-266003/istioapp-v1
docker push gcr.io/k8-istio-266003/istioapp-v1
In the same manner, we must run the commands for app1-v2, app1-v3 and app2 specified below.
app2 docker image creation:
docker build -t istioapp-2 .
docker tag istioapp-2 gcr.io/k8-istio-266003/istioapp2-v1
docker push gcr.io/k8-istio-266003/istioapp2-v1
app1-v2 docker image creation:
docker build -t istioapp1-v2 .
docker tag istioapp1-v2 gcr.io/k8-istio-266003/istioapp1-v2
docker push gcr.io/k8-istio-266003/istioapp1-v2
app1-v3 docker image creation:
docker build -t istioapp1-v3 .
docker tag istioapp1-v3 gcr.io/k8-istio-266003/istioapp1-v3
docker push gcr.io/k8-istio-266003/istioapp1-v3
Once we are done with the above operations, we will be able to see all the images in Google Container Registry.
Now, we will explore Istio capabilities in the sequence specified below:
- Istio Enabled Microservices
- Advanced Traffic Routing
- Canary Deployments
Istio Enabled Microservices
The client connects to the load balancer, which in turn connects to the istio-ingress controller, and this controller redirects the request to Application 1. I have committed the Kubernetes files "istiodemo.yml" and "istiodemo-gw.yml" to GitHub for this purpose.
"istiodemo.yml" creates the deployments and respective services for "app1" and "app2". Microservice "app1" calls microservice "app2" by its Kubernetes service name to get the value of the environment variable "VAR1", as we can see in Fig 1.13. Once the resources are created and a response is obtained from the "app1" URL, we will see the output that "app1" fetched from the "app2" URL. If everything goes well, we should see "Hello world from istio", where the "istio" value comes from microservice 2. Also, note that we do not define any Istio proxy in the file; it is injected automatically because we enabled injection on the namespace, so the file looks like an ordinary deployment.
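The actual manifest lives in the GitHub repo linked above; as a rough sketch of what it might contain (deployment/service names, labels, ports, and image tags here are assumptions based on the commands in this article, not copied from the repo):

```yaml
# Hedged sketch of the app2 half of istiodemo.yml -- names, labels, and
# the port are assumptions; app1's deployment/service would be analogous.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
  namespace: istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: gcr.io/k8-istio-266003/istioapp2-v1
        env:
        - name: VAR1
          value: "istio"    # app1 fetches this value from app2
---
apiVersion: v1
kind: Service
metadata:
  name: app2                # app1 calls app2 by this service name
  namespace: istio
spec:
  selector:
    app: app2
  ports:
  - port: 80
    targetPort: 5000        # assumed application port
```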
"istiodemo-gw.yml" creates a gateway and a virtual service to enable communication into the mesh. We need a virtual service only for "app1", as "app2" is accessed directly by "app1" to get the value of the "VAR1" environment variable. Since we create the resources in our namespace "istio", where injection is already enabled and validated, we should see the desired result. Let us apply both files to create the deployments, services, gateway, and virtual service.
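As another hedged sketch (resource names and the route target are assumptions, not the repo's exact contents), a gateway-plus-virtual-service file for this setup might look like:

```yaml
# Hypothetical sketch of a gateway file -- names are assumptions.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway
  namespace: istio
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo1
  namespace: istio
spec:
  hosts:
  - "*"
  gateways:
  - demo-gateway
  http:
  - route:
    - destination:
        host: demo1         # app1's Kubernetes service (assumed name)
        port:
          number: 80
```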
kubectl apply -f istiodemo.yml
kubectl apply -f istiodemo-gw.yml
After running the above commands, we need to look at running pods.
kubectl get pods -n istio
Once the pods are running, we can get our load balancer IP by running the below command. We need this IP to access the application.
kubectl get svc -n istio-system
To validate that our application is working fine, let us try getting a response from "app1" using the load balancer IP 220.127.116.11:
As the output shows, we get "Hello World from istio". This is how the Envoy proxy works as a sidecar container for communication between microservices.
Advanced Traffic Routing
In this demo, we will deploy the second version of "app1" ("app1-v2") with the Kubernetes deployment name "demo3". We will create "demo4" as the Kubernetes deployment where the environment variable "VAR1=istiov2". For advanced routing, we need to set certain routing rules after deploying the required resources.
Referring to "istiodemo2-routing.yml", we need to create two subsets, v1 and v2. Let us look at the Kubernetes resource definitions:
In “istiodemo2.yml”, we have defined “VAR1=istiov2”. For “istiodemo2-routing.yml”, we have specified routing rules to control our traffic based on request.
We will route the request to the "v1" version of "app1" if we pass the host as "istio.demo.com", and our output will be "Hello world from istio". If we also pass the header "end-user" with my name along with the host, the request will be routed to the "v2" version of "app1" and we will get "Hello world from istiov2" as output.
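The rules just described can be sketched as follows; the subset labels and resource names are assumptions, and only the matching logic follows the article:

```yaml
# Hedged sketch of header-based routing -- names/labels are assumptions.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo1
  namespace: istio
spec:
  host: demo1
  subsets:
  - name: v1
    labels:
      version: v1           # pods of app1 version 1
  - name: v2
    labels:
      version: v2           # pods of app1 version 2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo1
  namespace: istio
spec:
  hosts:
  - "istio.demo.com"
  gateways:
  - demo-gateway
  http:
  - match:
    - headers:
        end-user:
          exact: mohit      # requests with this header go to v2
    route:
    - destination:
        host: demo1
        subset: v2
  - route:                  # everything else goes to v1
    - destination:
        host: demo1
        subset: v1
```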
Let us create specified Kubernetes Resources:
kubectl apply -f istiodemo2.yml
kubectl apply -f istiodemo2-routing.yml
Once resources are created, we can try accessing our application URL by passing the respective headers.
curl 18.104.22.168 -H "host: istio.demo.com"
Output: Hello World from istio
curl 22.214.171.124 -H "host: istio.demo.com" -H "end-user: mohit"
Output: Hello World from istiov2
Istio also helps when we need to do canary deployments, where we release multiple versions of the same application and route only a certain percentage of traffic to the newer version, gather end-user feedback, and then decide whether to roll out or roll back the latest changes. Using Istio, we can define respective weights and route a limited set of users to a particular application version.
For this purpose, I have created version 3 of “app1”, created a docker image and pushed it to GCR. Once we have the image, we need to create a respective deployment for this version.
The destination rules to achieve the canary deployment route 90% of users to version 1 of "app1" and 10% of users to version 3. The host remains the same for both deployments, which is "demo1" in my case.
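Such a weighted split can be sketched as below; the subset names are assumptions, while the 90/10 weights come from the article:

```yaml
# Hedged sketch of weighted canary routing -- subset names are assumed.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo1
  namespace: istio
spec:
  hosts:
  - "istio.demo.com"
  gateways:
  - demo-gateway
  http:
  - route:
    - destination:
        host: demo1
        subset: v1
      weight: 90            # 90% of traffic stays on version 1
    - destination:
        host: demo1
        subset: v3
      weight: 10            # 10% of traffic reaches version 3
```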
Once this resource is created, let us try accessing the application URL using for loop to invoke 10 requests in a row.
for ((i=1;i<=10;i++)); do curl 126.96.36.199 -H "host: istio.demo.com"; done
The output shows that 9 out of 10 tries connected to version 1 and one was routed to version 3 of Application 1, maintaining the specified weights.
Grafana Dashboards for Service Metrics:
I have installed Grafana from the marketplace for the Kubernetes cluster inside my "istio-system" namespace. Once we select the Applications option in Kubernetes Engine, we can select Grafana and click on deploy. I have also changed the service type from "ClusterIP" to "LoadBalancer" so that I can access the URL from the browser.
Before setting up the dashboards, ensure that:
- The workloads are running.
- The Monitoring API and the Resource Manager API are enabled.
- A key is created for the service account. Browse to IAM & admin, find the respective account, and click on create key. It will download a JSON file.
We can get the Grafana URL using the command below; the External-IP it returns can be used to access the Grafana dashboard.
kubectl get svc -n istio-system
Once the Grafana dashboard is accessible, click on the data source, provide a name, select Type as "Stackdriver", and upload the service account key.
Now, it is time to create some dashboards. As part of this article, we will create 4 Widgets:
Requests Rates by Service – Add Graph type panel and put in the parameters as per screenshot:
Latencies by Service - Add Graph type panel and put in the parameters as per screenshot:
Errors by Service - Add Graph type panel and put in the parameters as per screenshot:
Latencies by Service – Error Requests - Add Graph type panel and put in the parameters as per screenshot:
Once this is complete, we will be able to monitor our services efficiently.
We have seen that ISTIO is a wonderful solution when it comes to managing the communication between microservices in a controlled manner, making it secure, efficient, and reliable. It may look like too much work to put into operation, but really it is just about understanding the requirement and creating the respective configurations; ISTIO takes care of the rest. In case you wish to explore more, please refer to the ISTIO official documentation.
Opinions expressed by DZone contributors are their own.