Docker Swarm Logging With ELK and the Logz.io Log Collector
If your team is looking for a great way to monitor how your Docker containers are performing, read on to learn how to use these helpful tools.
If you’re running containers at scale, you are most likely either already using a container orchestration tool or are in the process of deliberating over which one to use. To be able to build and run hundreds of containers, a management layer on top of your Docker hosts is necessary to be able to orchestrate the launching, scaling, and updating of your containers efficiently.
Docker Swarm is Docker’s built-in orchestration service. Included in the Docker Engine since version 1.12, Docker Swarm allows you to natively manage a cluster of Docker Engines, easily create a “swarm” of Docker hosts, deploy application services to this swarm, and manage the swarm’s behavior.
Logging in Swarm mode is not that different than logging in a “non-swarm” mode — container logs are written to stdout and stderr and can be collected using any of the logging drivers or log routers available. If you’re using the ELK Stack for centralized logging, you can use any of the methods outlined in this article.
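If you do want to route logs through one of Docker's built-in logging drivers instead, a swarm service can specify a driver at creation time. Here is a minimal sketch using the gelf driver; the service name, image, and the GELF endpoint address are placeholders for illustration:

```shell
# Sketch: create a swarm service whose containers ship logs via the
# gelf logging driver. Replace <LOG_HOST> with your own log receiver.
docker service create \
  --name web \
  --log-driver gelf \
  --log-opt gelf-address=udp://<LOG_HOST>:12201 \
  nginx:alpine
```
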
The Logz.io Docker Log Collector is a good option to use in Docker Swarm since it allows you to get a comprehensive picture of your swarm by (a) providing three layers of information from your Docker nodes — container logs, daemon events, and Docker stats from your hosts — and (b) allowing you to monitor cluster activity and performance using environment variables.
Let’s take a look.
Creating a Docker Swarm Cluster
If you already have Docker Swarm set up with running services, you can skip the following two sections, which explain how to set up Docker Swarm and install a demo app.
To create a Docker Swarm, I created three different Docker hosts — one to act as a manager node and the two others as workers.
On the host designated to be the manager, run the following (replace <SERVERIP> with the public IP of the host):
sudo docker swarm init --advertise-addr <SERVERIP>
You should get the following output:
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join \
--token <TOKEN> \
<SERVERIP>:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
The --advertise-addr flag configures the manager node to publish its address as <SERVERIP>, so before continuing, verify that the other nodes in the swarm can reach the manager at this specific IP address.
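A quick way to check this from each worker host is to probe the swarm management port (2377/tcp) on the manager. This is a sketch; <SERVERIP> is the manager's address from the earlier step:

```shell
# From each prospective worker, confirm the manager's swarm port is
# reachable before attempting to join. -z scans without sending data,
# -v prints the result.
nc -zv <SERVERIP> 2377
```
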
Next, SSH into your other nodes and use the command supplied above to join the Swarm cluster:
docker swarm join \
--token <TOKEN> \
<SERVERIP>:2377

This node joined a swarm as a worker.
On your manager, enter the following command to see the list of nodes in your cluster:
sudo docker node ls

ID                          HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
qdf1ipmtijgnti0n8ie1uaeo2 * ip-172-31-53-87   Ready   Active        Leader
t5rineip3z01na6t7s3qwftit   ip-172-31-63-228  Ready   Active
wvqzw4384nyj8yzz0zx1exnkx   ip-172-31-60-187  Ready   Active
Deploying the Demo App
Now that we have a Docker Swarm set up, we’re going to deploy the sample voting app on it.
To do this, use the following commands:
git clone https://github.com/dockersamples/example-voting-app.git
cd example-voting-app
sudo docker stack deploy --compose-file docker-stack.yml vote
Within a few seconds, you will have multiple services up and running, and replicated as defined in the application’s docker-stack.yml file.
View the services using:
docker service ls

ID            NAME             MODE        REPLICAS  IMAGE
6j76wdkt63a0  vote_vote        replicated  2/2       dockersamples/examplevotingapp_vote:before
78vgc1t221kn  vote_db          replicated  1/1       postgres:9.4
noj8ujaxsrx1  vote_result      replicated  1/1       dockersamples/examplevotingapp_result:before
nvek40nqdvgn  vote_worker      replicated  1/1       dockersamples/examplevotingapp_worker:latest
nx5g9ln4uxb0  vote_redis       replicated  2/2       redis:alpine
pfud4pp24ret  vote_visualizer  replicated  1/1       dockersamples/visualizer:stable
To take a look at the app, access it using port 5000 on ANY of the cluster nodes.
Results can be seen using port 5001:
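The reason any node works is the swarm routing mesh, which publishes a service's port on every node in the cluster. A quick sanity check from the command line might look like this (a sketch; <NODEIP> is the address of any cluster node):

```shell
# The routing mesh exposes published ports on every node, so either
# address works. These just confirm the front ends respond.
curl -s http://<NODEIP>:5000 | head -n 5   # voting front end
curl -s http://<NODEIP>:5001 | head -n 5   # results front end
```
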
Running the Log Collector
Our next step is to run the Docker log collector.
Wrapping docker-loghose and docker-stats, and running as a separate container per Docker host, the log collector fetches logs and monitors stats from your Docker environment and ships them to the Logz.io ELK Stack.
As specified above, the log collector ships the following types of messages:
- Docker container logs — logs produced by the containers themselves (the equivalent of the output of the ‘docker logs’ command).
- Docker events — Docker daemon “admin” actions (e.g., kill, attach, restart, and die).
- Docker stats — monitoring statistics for each of the running Docker containers (e.g., CPU, memory, and network).
Using the -a flag, we can add any number of labels to the data coming in from the containers. In Docker Swarm, we can use this option to add the name or ID of the cluster node.
For example, on the manager node, use the following command (enter your Logz.io token in the placeholder):
sudo docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock logzio/logzio-docker -t <TOKEN> -a env=dev -a swarm-node=master
Repeat this command for the other cluster nodes.
For worker 1:
sudo docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock logzio/logzio-docker -t <TOKEN> -a env=dev -a swarm-node=worker1
For worker 2:
sudo docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock logzio/logzio-docker -t <TOKEN> -a env=dev -a swarm-node=worker2
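To confirm the collector came up on each host, you can filter the running containers by the image used in the commands above; this is a sketch to run on every node:

```shell
# Verify the log collector container is running on this node.
# Filters running containers by the logzio/logzio-docker image.
docker ps --filter "ancestor=logzio/logzio-docker" \
  --format "table {{.ID}}\t{{.Status}}\t{{.Names}}"
```
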
Within a few minutes, you will have data streaming into the Logz.io ELK Stack.
Analyzing the Data
To help you begin analyzing the data being shipped from your Docker Swarm, here are a few tips and tricks.
First, decide which type of data you wish to focus on.
To analyze container logs only, use:

tags:docker-logs
To analyze container stats only, use:

tags:docker-stats
Select some of the fields from the list of available fields on the left. This will give you some visibility into the data being displayed in the main viewing area. For example, add the ‘swarm-node,’ ‘name,’ and ‘env’ fields.
You can focus on logs for a specific node using:
tags:docker-logs AND swarm-node:worker1
Visualizing the Data
Our final step in this article is to visualize the data coming from our Swarm nodes. Kibana is renowned for its visualization capabilities, and the sky’s the limit with what you can do with your data.
Here are a few examples.
Metric visualizations are ideal for showing a single stat. You can use them, for example, to show the number of worker nodes in your Swarm cluster:
Another example is to show a breakdown of logs per Swarm node using a pie chart visualization:
You can show the same data over time using a line chart visualization:
Using the Docker stats data, we can create a series of visualizations analyzing performance and resource consumption of our Swarm nodes. Here is an example of showing memory consumption over time, per node:
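Under the hood, a chart like this is just an Elasticsearch aggregation: a date histogram, split by node, averaging a memory metric. Here is a rough sketch of the equivalent query; the index pattern, the `tags` filter, and especially the memory field name (`memory_stats.usage`) are assumptions — inspect your own index mapping for the exact field names the collector produces:

```shell
# Sketch of the aggregation behind a "memory over time, per node" chart.
# Index pattern and field names are assumptions; check your own mapping.
curl -s -XGET 'http://localhost:9200/logstash-*/_search?size=0' \
  -H 'Content-Type: application/json' -d '{
  "query": { "term": { "tags": "docker-stats" } },
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "1m" },
      "aggs": {
        "per_node": {
          "terms": { "field": "swarm-node.keyword" },
          "aggs": {
            "avg_mem": { "avg": { "field": "memory_stats.usage" } }
          }
        }
      }
    }
  }
}'
```
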
Once you have your visualizations lined up, you can add them all up into one comprehensive dashboard for monitoring your Docker Swarm:
While the methodology for logging in Docker Swarm mode does not differ from logging in a regular Docker environment, analysis and visualization can vary based on the node data we decide to ship with the logs. The Logz.io log collector makes this pretty easy to do.
Published at DZone with permission of Daniel Berman, DZone MVB. See the original article here.