
Introduction to Service Meshes on Kubernetes


Service meshes provide an important layer between Kubernetes and microservices.


Image: the necessity of service meshes with Kubernetes and microservices.

Kubernetes has already solved the container orchestration problem. The remaining challenge of the cloud-native ecosystem is to make microservice delivery more efficient and resilient. This can be accomplished with service mesh technology.

Open-source service mesh projects such as Istio and Linkerd, along with the Envoy proxy that often underpins them, have gained popularity in recent years. This post explores the basics of service meshes, looks at the challenges of plain vanilla Kubernetes, and introduces several Kubernetes service mesh products.

What Is a Service Mesh?

A service mesh is a network infrastructure layer that controls and makes visible the communication between the different parts of an application. Modern applications are often built this way: the application is decomposed into “services,” each of which performs a specific business function.

A service may need to request data from other services in order to do its job, and some services inevitably become overloaded with requests. This is where a service mesh is useful: it optimizes communication between the parts of an application by controlling how requests are routed from one service to another.

Service mesh components include:

  • Control plane—configures the proxies, manages policies, and acts as the certificate authority for TLS. The control plane also collects network metrics, and some service mesh implementations can trace requests across services.
  • Data plane—consists of lightweight proxies deployed as sidecars next to each service instance. Common proxy choices include Envoy and NGINX, and you can use such a data plane to build your own Kubernetes service mesh. A sketch of a pod with an injected sidecar follows this list.
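
To make the sidecar idea concrete, here is a rough sketch of a pod that runs an application container next to a proxy container, the way a mesh's sidecar injector would lay it out. The manifest is built as a Python dict and printed as JSON (which kubectl also accepts); the service name, images, and ports are illustrative, not taken from any particular mesh.

    import json

    # Rough sketch of a pod after sidecar injection: the application container
    # handles business logic only, while the proxy container intercepts the
    # pod's inbound and outbound traffic. Names, images, and ports are
    # illustrative.
    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "orders", "labels": {"app": "orders"}},
        "spec": {
            "containers": [
                {
                    "name": "orders",                      # the application container
                    "image": "example/orders:1.0",
                    "ports": [{"containerPort": 8080}],
                },
                {
                    "name": "proxy",                       # the injected sidecar proxy
                    "image": "envoyproxy/envoy:v1.27.0",
                    "ports": [{"containerPort": 15001}],
                },
            ]
        },
    }

    # kubectl accepts JSON as well as YAML: kubectl apply -f pod.json
    print(json.dumps(pod, indent=2))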

Challenges of Plain Vanilla Kubernetes and How Service Meshes Can Help

When you run a plain vanilla Kubernetes cluster without a service mesh, you will run into the following problems:

Secure Communication Between Services

Plain Kubernetes does not encrypt traffic between cluster nodes, so communication between services is not secure by default. You can make communication between Kubernetes services more secure with TLS certificates.

Using TLS, however, means that your DevOps team has to manage and rotate certificates, and your development team has to integrate TLS into each service.

Service meshes save your team this effort by encrypting all service-to-service traffic transparently. A service mesh inserts a TLS-terminating sidecar into each Kubernetes pod, and the control plane can issue and rotate the certificates for you.
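
For example, with Istio (1.5 or later) you can enforce mutual TLS for every workload in a namespace with a single PeerAuthentication resource; once it is applied, the sidecars reject plaintext traffic. The sketch below expresses the manifest as a Python dict, and the namespace name is hypothetical.

    import json

    # Minimal sketch (assuming Istio 1.5+): force mutual TLS for every workload
    # in the "production" namespace. The namespace name is hypothetical.
    peer_authentication = {
        "apiVersion": "security.istio.io/v1beta1",
        "kind": "PeerAuthentication",
        "metadata": {"name": "default", "namespace": "production"},
        "spec": {"mtls": {"mode": "STRICT"}},  # sidecars reject plaintext traffic
    }

    print(json.dumps(peer_authentication, indent=2))  # apply with: kubectl apply -f -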

Service Latency Tracing

Troubleshooting a plain vanilla Kubernetes cluster does not always reveal the root of a problem. With latency issues, for instance, you can analyze the data from a single service, but that data may say nothing about its communication with other services; the real problem can be in a slow query or in the front-end application.

To solve this issue you have to monitor the performance of your code, analyze errors, and trace each service request in your app. Service mesh platforms such as Istio provide built-in distributed tracing and largely eliminate the need to instrument your code.

A service mesh routes all ingress and egress traffic through the proxy sidecar, which adds and records headers for request tracing. As a result, you get a trace for every request without having to analyze your code; the one thing a service still has to do is pass the tracing headers it receives on to its own outbound calls.
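
Here is a minimal sketch of that header propagation in a Python (Flask) service. The header list matches Envoy's Zipkin/B3 propagation, which Istio uses by default; the exact set depends on the tracer configured in your mesh, and the downstream "payments" service is hypothetical.

    from flask import Flask, request
    import requests

    app = Flask(__name__)

    # Headers Envoy uses to stitch spans into one distributed trace (B3/Zipkin).
    TRACE_HEADERS = [
        "x-request-id",
        "x-b3-traceid",
        "x-b3-spanid",
        "x-b3-parentspanid",
        "x-b3-sampled",
        "x-b3-flags",
    ]

    @app.route("/checkout")
    def checkout():
        # Copy the incoming trace context onto the outbound request so the
        # sidecar attributes the downstream call to the same trace.
        headers = {h: request.headers[h] for h in TRACE_HEADERS if h in request.headers}
        resp = requests.get("http://payments:8080/charge", headers=headers)
        return resp.text, resp.status_code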

Limited Load Balancing

Plain Kubernetes does not give you an easy way to pinpoint traffic bottlenecks or to shift traffic gradually when a service needs to scale. A service mesh, on the other hand, provides built-in traffic metrics and routing rules that you can leverage for more advanced load balancing and traffic shifting.
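
As a rough illustration of that extra control, the sketch below uses an Istio VirtualService to send 90% of traffic to one version of a front-end service and 10% to a canary version. The host and subset names are hypothetical, and the v1/v2 subsets would be defined in a companion DestinationRule (not shown).

    import json

    # Minimal sketch: an Istio VirtualService that splits traffic between two
    # versions of a hypothetical "frontend" service.
    virtual_service = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": "frontend"},
        "spec": {
            "hosts": ["frontend"],
            "http": [
                {
                    "route": [
                        {"destination": {"host": "frontend", "subset": "v1"}, "weight": 90},
                        {"destination": {"host": "frontend", "subset": "v2"}, "weight": 10},
                    ]
                }
            ],
        },
    }

    print(json.dumps(virtual_service, indent=2))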

Different Kubernetes Service Mesh Implementations

The sections below review three Kubernetes service mesh products available today and highlight the important differences between them.

Istio

Istio is an open-source service mesh designed to manage a fleet of service proxies. It was developed by Google, IBM, and Lyft, originally targeting only Kubernetes deployments, and was later redesigned to target other microservice platforms as well. Istio integrates with the Envoy proxy by default. Its main design goals are scalability, performance, portability, and keeping components loosely coupled for flexibility.

The control plane of Istio is written in Go, and operators use it to compose different management policies. The control-plane components are deliberately decoupled from any particular proxy, so Istio can pair with different underlying data planes.

Main Istio features include:

  • Security features, including RBAC, identity, and key management
  • Advanced rate limiting, policy enforcement, and quotas
  • Support for HTTP/1.x, HTTP/2, gRPC, WebSockets, and all TCP traffic
  • Fault injection (see the sketch after this list)
  • Multi-platform, hybrid deployment
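
Fault injection is worth a quick illustration. The sketch below, again a Python dict standing in for the manifest, delays half of the requests to a hypothetical "payments" service by three seconds so you can observe how its callers behave under latency.

    import json

    # Minimal sketch: Istio fault injection that delays 50% of requests to a
    # hypothetical "payments" service by three seconds.
    fault_injection = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": "payments"},
        "spec": {
            "hosts": ["payments"],
            "http": [
                {
                    "fault": {"delay": {"percentage": {"value": 50}, "fixedDelay": "3s"}},
                    "route": [{"destination": {"host": "payments"}}],
                }
            ],
        },
    }

    print(json.dumps(fault_injection, indent=2))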

Linkerd

Linkerd is an open-source service mesh project released in February 2016, the first product in the service mesh family. The platform provides a powerful and feature-rich service mesh design that can run in any environment. The original Linkerd is based on the Finagle library, written in Scala, and can scale up to handle thousands of requests per second.

The Linkerd package consists of a control plane and a proxying data plane, and a commercially supported version is available from Buoyant. The current Linkerd version includes the Service Mesh Interface (SMI) traffic API, which enables you to automate canary deployments and other advanced delivery approaches (a sketch follows below).
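
As a rough sketch of what that looks like, the SMI TrafficSplit resource below keeps most of the traffic on a stable backend and routes a slice to a canary; Linkerd's control plane performs the weighted routing. The service names are hypothetical.

    import json

    # Minimal sketch: an SMI TrafficSplit, as consumed by Linkerd, sending 90%
    # of traffic to the stable backend and 10% to a canary.
    traffic_split = {
        "apiVersion": "split.smi-spec.io/v1alpha2",
        "kind": "TrafficSplit",
        "metadata": {"name": "orders-split"},
        "spec": {
            "service": "orders",  # the apex service that clients call
            "backends": [
                {"service": "orders-stable", "weight": 900},
                {"service": "orders-canary", "weight": 100},
            ],
        },
    }

    print(json.dumps(traffic_split, indent=2))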

Main Linkerd features include:

  • Supports different platforms such as Kubernetes, Docker, Amazon ECS, and DC/OS
  • Unites multiple systems with built-in service discovery abstractions
  • Support for HTTP/1.x, HTTP/2, gRPC, WebSockets, and all TCP traffic

AWS App Mesh

AWS App Mesh is a service mesh solution that simplifies microservice monitoring and management on AWS. App Mesh enables you to control communication and network traffic between microservices running on AWS compute services such as ECS, EKS, and EC2. In addition, App Mesh enables you to monitor, trace, and log your microservices.

You deploy the App Mesh data plane, an Envoy proxy, alongside your application. The control plane is managed by Amazon, and users can't access it directly.

Conclusion

Hopefully, this article helped you understand what service meshes are, how you can use these tools, and where to start your service mesh journey.

Evaluating service mesh options requires comprehensive research. The options covered here are only three of those available. Before deciding on a solution, make sure to try different options for yourself to see what actually works best in your environment.

Further Reading

  • The Rise of Service Mesh Architecture
  • Service Mesh and Cloud-Native Microservices


