A Service Mesh for Kubernetes (Part 3): Encrypting all the Things


Now that you've integrated linkerd with Kubernetes, it's important to add security to your communications by encrypting your HTTP calls with TLS.


In this article, we’ll show you how to use linkerd as a service mesh to add TLS to all service-to-service HTTP calls, without modifying any application code.

Note: this is one of a series of articles about linkerd, Kubernetes, and service meshes. Other installments in this series include:

  1. Top-line service metrics
  2. Pods are great, until they’re not
  3. Encrypting all the things (this article)
  4. Continuous deployment via traffic shifting
  5. Dogfood environments, ingress, and edge routing
  6. Staging microservices without the tears
  7. Distributed tracing made easy
  8. Linkerd as an ingress controller
  9. gRPC for fun and profit
  10. The service mesh API
  11. Egress
  12. Retry budgets, deadline propagation, and failing gracefully
  13. Autoscaling by top-line metrics

In the first installment in this series, we showed you how you can easily monitor top-line service metrics (success rates, latencies, and request rates) when linkerd is installed as a service mesh. In this article, we’ll show you another benefit of the service mesh approach: it allows you to decouple the application’s protocol from the protocol used on the wire. In other words, the application can speak one protocol, but the bytes that actually go out on the wire are in another.

In the case where no data transformation is required, linkerd can use this decoupling to automatically do protocol upgrades. Examples of the sorts of protocol upgrades that linkerd can do include HTTP/1.x to HTTP/2, thrift to thrift-mux, and, the topic of this article, HTTP to HTTPS.

A Service Mesh for Kubernetes

When linkerd is deployed as a service mesh on Kubernetes, we place a linkerd instance on every host using DaemonSets. For HTTP services, pods can send HTTP traffic to their host-local linkerd by using the http_proxy environment variable. (For non-HTTP traffic the integration is slightly more complex.)
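For example, a pod can locate its node-local linkerd via the Kubernetes downward API. A sketch of the relevant pod spec fragment (the env var names are conventional, and 4140 is linkerd's default outgoing port in this series' configs):

```yaml
env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName    # the node this pod is scheduled on
- name: http_proxy
  value: $(NODE_NAME):4140        # route HTTP calls via the node-local linkerd
```

With `http_proxy` set, standard HTTP clients send their requests through linkerd without any application code changes.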

In our blog post from a few months ago, we showed you the basic pattern of using linkerd to “wrap” HTTP calls in TLS by proxying at both ends of the connection, both originating and terminating TLS. However, now that we have the service mesh deployment in place, things are significantly simpler. Encrypting all cross-host communication is largely a matter of providing a TLS certificate to the service mesh.

Let’s walk through an example. The first two steps will be identical to what we did in Part I of this series — we’ll install linkerd as a service mesh and install a simple microservice “hello world” application. If you have already done this, you can skip straight to step 3.

Step 1: Install linkerd

We can install linkerd as a service mesh on our Kubernetes cluster by using this Kubernetes config. This will install linkerd as a DaemonSet (i.e., one instance per host) in the default Kubernetes namespace:

kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd.yml

You can confirm that installation was successful by viewing linkerd’s admin page (note that it may take a few minutes for the ingress IP to become available):

INGRESS_LB=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}")
open http://$INGRESS_LB:9990 # on OS X

Step 2: Install the Sample Apps

Install two services, “hello” and “world”, using this hello-world config. This will install the services into the default namespace:

kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/hello-world.yml

These two services function together to make a highly scalable “hello world” microservice (where the hello service, naturally, must call the world service to complete its request).

At this point, we actually have a functioning service mesh and an application that makes use of it. You can see the entire setup in action by sending traffic through linkerd’s external IP:

http_proxy=$INGRESS_LB:4140 curl -s http://hello

If everything’s working, you should see the string “Hello world”.

Step 3: Configure linkerd to Use TLS

Now that linkerd is installed, let’s use it to encrypt traffic. We’ll place TLS certificates on each of the hosts, and configure linkerd to use those certificates for TLS.

We’ll use a global certificate (the mesh certificate) that we generate ourselves. Since this certificate is not tied to a public DNS name, we don’t need to use a service like Let’s Encrypt. We can instead generate our own CA certificate and use that to sign our mesh certificate (“self-signing”). We’ll distribute three things to each Kubernetes host: the CA certificate, the mesh key, and the mesh certificate.
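As a sketch of what “self-signing” involves, here is one way to generate a CA and a CA-signed mesh certificate with openssl (the filenames and subject names are arbitrary placeholders, and, as with the sample certificates below, throwaway certs like these should never be used in production):

```shell
# Create a CA key and a self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=mesh-ca"

# Create the mesh key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -keyout mesh.key -out mesh.csr -subj "/CN=linkerd"

# Sign the mesh certificate with the CA
openssl x509 -req -days 365 -in mesh.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out mesh.crt
```

The resulting `ca.crt`, `mesh.key`, and `mesh.crt` correspond to the three artifacts we distribute to each host.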

The following steps use sample certificates that we’ve generated. Please don’t use these certificates in production. For instructions on generating your own self-signed certificates, see our previous post.

Step 4: Deploy Certificates and Config Changes to Kubernetes

We’re ready to update linkerd to encrypt traffic. We will distribute the sample certificates as Kubernetes secrets.

kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/certificates.yml
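Under the hood, a config like `certificates.yml` is just an ordinary Kubernetes Secret that linkerd's DaemonSet can mount as files. A minimal sketch (the name, keys, and contents here are placeholders, not the exact values in the file):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: certificates
type: Opaque
data:
  # values are base64-encoded PEM files
  cacertificate.pem: <base64-encoded CA certificate>
  certificate.pem: <base64-encoded mesh certificate>
  key.pem: <base64-encoded mesh private key>
```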

Now we will configure linkerd to use these certificates by giving it this configuration and restarting it:

kubectl delete ds/l5d configmap/l5d-config
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-tls.yml
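The TLS-enabled config differs from the original mainly in its TLS stanzas: the outgoing router originates TLS to remote linkerds, and the incoming router terminates it. A sketch of the relevant pieces of a linkerd 1.x config (labels and certificate paths are illustrative, not the exact values in `linkerd-tls.yml`):

```yaml
routers:
# outgoing router: wraps requests to remote linkerds in TLS
- protocol: http
  label: outgoing
  servers:
  - port: 4140
  client:
    tls:
      commonName: linkerd            # name the peer's certificate must present
      trustCerts:
      - /certs/cacertificate.pem     # CA used to verify peer certificates

# incoming router: terminates TLS from remote linkerds
- protocol: http
  label: incoming
  servers:
  - port: 4141
    tls:
      certPath: /certs/certificate.pem
      keyPath: /certs/key.pem
```

Because both sides verify against the same CA, a linkerd instance will only talk to peers presenting a certificate signed by that CA.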

Step 5: Success!

At this point, linkerd should be transparently wrapping all communication between these services in TLS. Let’s verify this by running the same command as before:

http_proxy=$INGRESS_LB:4140 curl -s http://hello

If all is well, you should still see the string “Hello world”—but under the hood, communication between the hello and world services is being encrypted. We can verify this by making an HTTPS request directly to port 4141, where linkerd is listening for requests from other linkerd instances:

curl -skH 'l5d-dtab: /svc=>/#/io.l5d.k8s/default/admin/l5d;' https://$INGRESS_LB:4141/admin/ping

Here we’re asking curl to make an HTTPS call, and telling it to skip certificate validation (the mesh certificate is signed by our own CA, which isn’t in curl’s trust store). We’re also adding a dtab override to route the request to the linkerd instance’s own admin interface. If all is well, you should see a successful “pong” response. Congratulations! You’ve encrypted your cross-service traffic.


In this post, we’ve shown how a service mesh like linkerd can be used to transparently encrypt all cross-node communication in a Kubernetes cluster. We’re also using TLS to ensure that linkerd instances can verify that they’re talking to other linkerd instances, preventing man-in-the-middle attacks (and misconfiguration!). Of course, the application remains blissfully unaware of any of these changes.

TLS is a complex topic and we’ve glossed over some important security considerations for the purposes of making the demo easy and quick. Please make sure you spend the time to fully understand the steps involved before you try this on your production cluster.

Finally, adding TLS to the communications substrate is just one of many things that can be accomplished with a service mesh. Be sure to check out the rest of the articles in this series for more!

For help with this or anything else about linkerd, feel free to stop by our linkerd community Slack, post a topic on linkerd discourse, or contact us directly!


Published at DZone with permission of Alex Leong , DZone MVB. See the original article here.

