Distributed Systems and the Sidecar Pattern
In Part 1 of a multi-part series, a developer demonstrates how to implement the sidecar pattern in a microservice application using Node.js.
Introduction
Part 1: Sidecar Pattern
This series will demonstrate some examples of common distributed systems patterns, inspired by some of the patterns in Brendan Burns' book, Designing Distributed Systems.
I will outline some examples and discuss some potential use cases.
This series will utilise Kubernetes and Docker throughout, and will be written in a mix of Node, Python, and, of course, Golang. It's aimed mostly at people who are new to distributed systems. Ensure you have Docker installed and Kubernetes running locally on your machine (through the Docker for Mac tool, for instance). But you can run it anywhere; so long as you have access to these tools, we're good!
The first pattern we're going to look at is the Sidecar pattern.
Use Case
Let's say, for example, we have several web services, and we have a requirement to monitor and log traffic into each of them. We have a logger library which we want to use.
Many of the services are old and too brittle to touch, so we can't update them directly, and it would take too long to update each service by hand anyway. We just want to proxy traffic through an application that performs this functionality for us.
Our Application
Let's start by creating a pseudo legacy app. In this case, it's just a hello world Node.js application, but for the sake of this example, let's pretend that it's actually some legacy COBOL financial application, and that the original team of developers who wrote it retired 10 years ago and never want to look at a line of code ever again. Tenuous example, I know, but I want to hammer home the point that distributed patterns enable us to do some really powerful things with code we can't, or don't want to, touch.
// sidecar/app/app.js
const express = require('express');

const app = express();
const port = process.env.PORT || 8080;

app.get('/', (req, res) => {
  res.send('Hello you!');
});

app.listen(port, () => {
  console.log(`Application started on port: ${port}`);
});
It's just a bog-standard Express app. It exposes an index route which just returns a 'hello' message.
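If you'd like to try the app on its own before we containerise it, you can run it directly (assuming you've installed the dependency with npm install express inside the app directory):
$ PORT=8080 node app.js
$ curl http://localhost:8080
Hello you!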
Now our Dockerfile:
FROM node:10-alpine
COPY . .
# Install the app's dependencies (express) inside the image.
RUN npm install --production
CMD ["node", "app.js"]
As basic as it can possibly get: we're using a node:10-alpine image. I'm using the Alpine variant as it's tiny, which is beneficial for production-ready Docker services; things boot up quicker and take up less room on your cluster!
I've created a fake logging library, which doesn't do very much, but let's pretend it has loads of interesting tracing and logging features. We'll use this in our sidecar application:
'use strict';

class Logger {
  constructor(service) {
    this.service = service;
  }

  send(target, route, time = Date.now()) {
    // pretend this is actually doing something
    // more interesting than this...
    console.log(`${this.service} - ${target} - ${route} - ${time}`);
  }
}

module.exports = Logger;
In real-world use cases, this could be a Prometheus integration, or Zipkin, or Jaeger.
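Just to illustrate the interface, here's roughly how the sidecar will use it (the values here are only examples):
const Logger = require('./lib/really-cool-logger');

// 'legacy-application' will come from the SRV_NAME environment
// variable in our Kubernetes config later on.
const logger = new Logger('legacy-application');

// 'time' defaults to Date.now(), so we can omit it. This logs:
// legacy-application - http://localhost:8080 - / - <timestamp>
logger.send('http://localhost:8080', '/');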
Now let's create our sidecar application:
const express = require('express');
const httpProxy = require('http-proxy');
const Logger = require('./lib/really-cool-logger');

const app = express();
const apiProxy = httpProxy.createProxyServer();

const SRV_NAME = process.env.SRV_NAME;
const logger = new Logger(SRV_NAME);

const {
  TARGET = 'http://localhost:8080',
  PORT = 80
} = process.env;

app.all('/*', (req, res) => {
  logger.send(TARGET, req.url, Date.now());
  apiProxy.web(req, res, { target: TARGET });
});

app.listen(PORT);
Pretty straightforward: yet another Express server, this time using a library called http-proxy, which does exactly what it says on the tin. We create a catch-all endpoint and direct the traffic to our apiProxy, using an environment variable as our target server (in our case, it'll be our legacy app). Oh, you'll also notice our mega-interesting logger is called before we proxy each request through to the legacy app.
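One caveat the snippet above doesn't handle: http-proxy emits an 'error' event when the target is unreachable, and an unhandled error will crash the sidecar. A minimal sketch of a handler, assuming we want to return a 502 to the caller:
// Without this, a downed target would take the sidecar down with it.
apiProxy.on('error', (err, req, res) => {
  res.writeHead(502, { 'Content-Type': 'text/plain' });
  res.end('Bad gateway: could not reach the target service');
});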
We use the same Dockerfile as before for this application, and we're good to go. I've added a Makefile in the root to build our images:
build-images:
	docker build -t sidecar-app:v1 app
	docker build -t sidecar-sidecar:v1 sidecar
Now we can build our images with $ make build-images.
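It's worth confirming the images exist locally, e.g. with $ docker images | grep sidecar, because the deployment below uses imagePullPolicy: Never, meaning Kubernetes will only use locally built images rather than pulling from a registry.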
Finally, let's get to the cool part! Let's create our Kubernetes deployment and service config.
First of all, our deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidecar-service
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sidecar-service
  template:
    metadata:
      labels:
        app: sidecar-service
    spec:
      containers:
        - name: sidecar-app
          image: sidecar-app:v1
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
        - name: sidecar
          image: sidecar-sidecar:v1
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          env:
            - name: SRV_NAME
              value: "legacy-application"
            - name: PORT
              value: "80"
            - name: TARGET
              value: "http://localhost:8080"
We've created a Kubernetes deployment with two containers: the first is our legacy application; the second, our sidecar. We use environment variables to configure the sidecar's target. Our legacy application is running on localhost:8080, so we point our sidecar at that location; this works because both containers run in the same pod, and containers in a pod share a network namespace.
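That localhost sharing is easy to verify once the pod is running. A quick sanity check (the pod name is a placeholder, and this assumes the busybox wget that ships with the alpine base image):
$ kubectl exec <pod-name> -c sidecar -- wget -qO- http://localhost:8080
Hello you!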
kind: Service
apiVersion: v1
metadata:
  name: sidecar-service
spec:
  selector:
    app: sidecar-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
Now let's create a service. A service sets up network-level configuration; in our case, we're creating a load balancer which points at our application pods on port 80. Our sidecar application is listening on port 80, so traffic gets routed through our load balancer, to our sidecar, and finally to our legacy application.
Let's fire up our deployment and our service: $ kubectl create -f ./deployment.yml,./service.yml.
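You can also check that the service has been exposed with $ kubectl get service sidecar-service (on Docker for Mac, the LoadBalancer's external IP should show as localhost).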
Check the status of your deployment: $ kubectl get pods. You should see your sidecar-service pods listed, with both containers in each pod ready (2/2).
Now make a few requests to http://localhost; you should see our 'hello' message. You should now also be able to check the logs, using: $ kubectl logs <pod-name> sidecar.
You should see the output from our logging library, one line per proxied request, along the lines of: legacy-application - http://localhost:8080 - / - <timestamp>.
Conclusion
This is a very simple, single-node pattern, but it's really useful. Other good use cases include tracing, security (such as enforcing SSL in front of legacy applications), and service meshes such as Istio.
Published at DZone with permission of Ewan Valentine, DZone MVB.