Service Mesh and Cloud-Native Microservices
With Apache Kafka, Kubernetes, Envoy, Istio, and Linkerd.
Microservices need to be decoupled, flexible, operationally transparent, data-aware, and elastic. Most material from last year only discusses point-to-point architectures built on tightly coupled, non-scalable technologies like REST/HTTP. This blog post takes a look at cutting-edge technologies like Apache Kafka, Kubernetes, Envoy, Linkerd, and Istio to implement a cloud-native service mesh that solves these challenges and brings microservices to the next level of scale, speed, and efficiency.
These are the key requirements for building a scalable, reliable, robust, and observable microservice architecture. Before we go into more detail, let's take a look at the key takeaways first:
- Apache Kafka decouples services, including event streams and request-response communication.
- Kubernetes provides a cloud-native infrastructure for the Kafka ecosystem.
- A service mesh helps with security and observability at the ecosystem/organization scale.
- Envoy and Istio sit in the layer above Kafka and are orthogonal to the goals Kafka addresses.
The following sections cover these points in more detail. The end of the blog post contains a slide deck and video recording with more detailed explanations.
Microservices, Service Mesh, and Apache Kafka
Apache Kafka became the de facto standard for microservice architectures. It goes far beyond reliable and scalable high-volume messaging: the distributed storage allows high availability and real decoupling between independent microservices. In addition, you can leverage Kafka Connect for integration and the Kafka Streams API for building lightweight stream-processing microservices in autonomous teams.
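To illustrate the Kafka Connect part: connectors are driven by plain JSON configuration rather than custom integration code. Here is a minimal sketch using the FileStreamSource connector that ships with Apache Kafka; the connector name, file path, and topic are placeholders:

```json
{
  "name": "orders-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/var/data/orders.txt",
    "topic": "orders"
  }
}
```

Posting this to the Connect REST API (`POST /connectors`) starts a source task that streams each line of the file into the `orders` topic, with no application code to deploy.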
A service mesh complements the architecture. It describes the network of microservices that make up such applications and the interactions between them. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.
I explore the problem of distributed microservice communication and how both Apache Kafka and service mesh solutions address it. This blog post looks at some approaches for combining the two to build a reliable and scalable microservice architecture with decoupled and secure microservices.
Discussions and architectures include various open-source technologies like Apache Kafka, Kafka Connect, Kubernetes, HAProxy, Envoy, Linkerd, and Istio.
Learn more about decoupling microservices with Kafka in the related blog post "Microservices, Apache Kafka, and Domain-Driven Design (DDD)".
Cloud-Native Kafka with Kubernetes
Cloud-native infrastructures are scalable, flexible, agile, elastic, and automated. Kubernetes became the de facto standard. Deploying stateless services is pretty easy and straightforward. However, deploying stateful and distributed applications like Apache Kafka is much harder, and a lot of human operations are required. Kubernetes does not automatically solve Kafka-specific challenges like rolling upgrades, security configuration, or data balancing between brokers. A Kafka operator, implemented with Kubernetes Custom Resource Definitions (CRDs), can help here!
The operator pattern for Kubernetes aims to capture the key aim of a human operator who is managing a service or set of services. Human operators who look after specific applications and services have deep knowledge of how the system ought to behave, how to deploy it, and how to react if there are problems.

People who run workloads on Kubernetes often like to use automation to take care of repeatable tasks. The operator pattern captures how you can write code to automate a task beyond what Kubernetes itself provides.
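To make the operator pattern concrete, here is a minimal sketch of what a Kafka custom resource can look like. This assumes the open-source Strimzi operator; the field names follow its `kafka.strimzi.io` CRD, and the cluster name, replica counts, and storage sizes are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3              # brokers; the operator handles rolling upgrades
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
```

Applying this manifest with `kubectl apply -f` lets the operator create and manage the brokers, rather than a human operator wiring up StatefulSets, services, and upgrade procedures by hand.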
Service Mesh with Kubernetes-Based Technologies like Envoy, Linkerd, or Istio
Service mesh is a microservice pattern that moves visibility, reliability, and security primitives for service-to-service communication into the infrastructure layer, out of the application layer.
A great, detailed explanation of the "service mesh" design pattern can be found here; it includes a diagram showing the relation between the control plane and the microservices with their proxy sidecars.
You can find much more great content about service mesh concepts and their implementations from the creators of frameworks like Envoy or Linkerd. Check out these two links, or just use Google for more information about the competing alternatives and their trade-offs.
(Potential) Features for Apache Kafka and Service Mesh
An event streaming platform like Apache Kafka and a service mesh on top of Kubernetes are cloud-native, orthogonal, and complementary. Together, they solve the key requirements for building a scalable, reliable, robust, and observable microservice architecture.
Companies already use Kafka together with service mesh implementations like Envoy, Linkerd, or Istio today. You can easily combine them to add security, enforce rate limiting, or implement other related use cases. Banzai Cloud published one of the most interesting architectures: they use Istio to add security to Kafka brokers and ZooKeeper via Envoy proxies.
However, in the meantime, the support has gotten even better: the pull request for Kafka support in Envoy was merged in May 2019. This means you now have native Kafka protocol support in Envoy. The very interesting discussions about the challenges and potential features of implementing a Kafka protocol filter are also worth reading.
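As a minimal sketch of what this looks like in an Envoy configuration (assuming the v3 API): the `kafka_broker` filter is chained in front of `tcp_proxy`, so Envoy parses the Kafka protocol for per-request stats while still forwarding the raw traffic. The listener port, cluster name, and broker address are placeholders:

```yaml
static_resources:
  listeners:
    - name: kafka_listener
      address:
        socket_address: { address: 0.0.0.0, port_value: 19092 }
      filter_chains:
        - filters:
            # L7 Kafka protocol parsing (stats per API key, request/response sizes)
            - name: envoy.filters.network.kafka_broker
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.kafka_broker.v3.KafkaBroker
                stat_prefix: kafka
            # Plain L4 forwarding to the real broker
            - name: envoy.filters.network.tcp_proxy
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                stat_prefix: tcp
                cluster: kafka_cluster
  clusters:
    - name: kafka_cluster
      connect_timeout: 0.25s
      type: STRICT_DNS
      load_assignment:
        cluster_name: kafka_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: kafka-broker, port_value: 9092 }
```

Note that the broker must advertise the proxy's address (via `advertised.listeners`) so that clients keep talking through Envoy after the initial metadata exchange.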
With native Kafka protocol support, you can do many more interesting things beyond L4 TCP filtering. Here are just some ideas (partly from the GitHub discussion above) of what you could do with L7 Kafka protocol support in a service mesh:
Protocol Conversion from HTTP/gRPC to Kafka
- Tap feature to dump to a Kafka stream.
- Protocol parsing for observability (stats, logging, and trace linking with HTTP RPCs).
- Shadow requests to a Kafka stream instead of an HTTP/gRPC shadow.
- Integration with Kafka Connect and its whole ecosystem of connectors.
- Dynamic routing.
- Rate limiting at both the L4 connection and L7 message level.
- Filtering, adding compression.
- Automatic topic name conversion (e.g., for canary releases or blue/green deployments).
Monitoring and Tracing
- Request logs and stats.
- Data lineage/audit log.
- Audit log by taking request logs and enriching them with user info.
- Client-specific metrics (byte rate per client ID/consumer group, versions of the client libraries, consumer lag monitoring for the entire data center).
- SSL termination.
- Mutual TLS (mTLS).
Validation of Events
- Serialization format (JSON, Avro, Protobuf, etc.).
- Message schema.
- Headers, attributes, etc.
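As an example of the mTLS item above: with Istio, enforcing mutual TLS for all sidecar-proxied workloads in a namespace takes only a small policy. A minimal sketch assuming Istio's `PeerAuthentication` API; the namespace name is a placeholder:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: kafka          # placeholder: namespace running Kafka clients
spec:
  mtls:
    mode: STRICT            # sidecars accept only mutual-TLS traffic
```

The mesh handles certificate issuance and rotation, so the Kafka clients and brokers behind the sidecars need no TLS configuration of their own for the in-mesh hops.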
That's awesome, isn't it?
Published at DZone with permission of Kai Wähner, DZone MVB. See the original article here.