Docker Stats Monitoring: Taking Dockbeat for a Ride
Read on to learn more about the latest addition to Elastic’s family of beats that help with integrating Docker logging with ELK.
There is no silver bullet. This is how I always answer those asking about the best logging solution for Docker. (A bit pessimistic of me, I know.) But what I’m implying with that statement is that there is no perfect method for gaining visibility into containers. Dockerized environments are distributed, dynamic, and multi-layered in nature, so they are extremely difficult to log.
That’s not to say that there are no solutions — on the contrary. From Docker’s logging drivers to logging containers to using data volumes, there are plenty of ways to log Docker, but all have certain limitations or pitfalls. Logz.io users use a dedicated container that acts as a log collector by pulling Docker daemon events, stats, and logs into our ELK Stack (Elasticsearch, Logstash, and Kibana).
That’s why I was curious to hear about the first release of Dockbeat (called Dockerbeat prior to Docker’s new repo naming conventions) — the latest addition to Elastic’s family of beats, which is a group of different log collectors developed for different environments and purposes. Dockbeat was contributed by the ELK community and is focused on using the docker stats API to push container resource usage metrics such as memory, IO, and CPU to either Elasticsearch or Logstash.
Below is a short review of how to get Dockbeat up and running, along with a few first impressions. My environment was a locally installed ELK Stack and Docker on Ubuntu 14.04.
To get Dockbeat up and running, you can either build the project yourself or use the binary release on the GitHub repository. The former requires some additional setup steps (installing Go and Glide, for starters), and I eventually opted for the latter. It took just a few steps and proved to be pretty painless (an additional method is to run Dockbeat as a container — see the repo’s readme for more details).
You will first need to download the source code and the release binary from the project's GitHub repository:
```
$ git clone https://github.com/Ingensi/dockbeat.git
$ wget https://github.com/Ingensi/dockbeat/releases/download/v1.0.0/dockbeat-v1.0.0-x86_64
```
Configuring and Running Dockbeat
Before you start Dockbeat, there is the matter of configurations. Since I used a vanilla installation with Docker and ELK installed locally, I did not need to change a thing in the supplied dockbeat.yml file.
Dockbeat is configured to connect to the default Docker socket:
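For reference, the socket setting in dockbeat.yml looks roughly like this. The key names here are a sketch from memory of the shipped config and may differ slightly between Dockbeat versions, so check them against the file in the repo:

```yaml
input:
  # How often (in seconds) to poll the Docker stats API
  period: 5
  # The default local Docker daemon socket
  socket: unix:///var/run/docker.sock
```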
My local Elasticsearch was already defined in the output section:
```
### Elasticsearch as output
elasticsearch:
  hosts: ["localhost:9200"]
```
Of course, if you’re using a remotely installed Elasticsearch or Logstash instance, you will need to change these configurations respectively.
Before you start Dockbeat, you will need to grant execution permissions to the binary file:
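The filename below matches the wget command above; adjust it if your release differs:

```shell
chmod +x dockbeat-v1.0.0-x86_64
```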
Then, to start Dockbeat, use the following run command:
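The run command itself looks like the following. Pointing the -c flag at the config file is the standard Beats CLI convention, but verify the exact flags for your version with ./dockbeat-v1.0.0-x86_64 -h:

```
$ ./dockbeat-v1.0.0-x86_64 -c dockbeat.yml -e -v
```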
Please note: I used the two optional parameters (‘-v’, ‘-e’) to see the output of the run command, but these, of course, are not mandatory.
Dockbeat then runs, and if all goes as expected, you should see the following lines in the debug output:
```
2016/09/18 09:13:40.851229 beat.go:173: INFO dockbeat successfully setup. Start running.
2016/09/18 09:13:40.851278 dockbeat.go:196: INFO dockbeat%!(EXTRA string=dockbeat is running! Hit CTRL-C to stop it.)
2016/09/18 09:14:47.101231 dockbeat.go:320: INFO dockbeat%!(EXTRA string=Publishing %v events, int=5)
...
```
It seems like all is working as expected, so my next step is to ping Elasticsearch:
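Elasticsearch's _cat/indices API is a convenient way to check that a Dockbeat index has been created:

```
$ curl localhost:9200/_cat/indices
```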
The output displays a cross-section of the Elasticsearch indices:
```
yellow open dockbeat-2016.09.18 5 1 749 0 773.7kb 773.7kb
yellow open .kibana             1 1   1 0   3.1kb   3.1kb
```
The next step is to define the index pattern in Kibana. After clicking on the Settings tab in Kibana, I entered dockbeat-* in the index name/pattern field and selected @timestamp as the time-field name to create the new index pattern:
Now, all the metrics collected by Dockbeat and stored by Elasticsearch are listed in the Discover tab in Kibana:
Analyzing Container Statistics
The metrics collected by Dockbeat via the docker stats API are pretty extensive and include container attributes, CPU usage, network statistics, memory statistics, and IO access statistics.
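To give a sense of the data's shape, a single Dockbeat event indexed in Elasticsearch looks roughly like this. This is a hand-written illustration, not actual output — only the type and containerName fields are referenced later in this post, so check the rest of the schema against your own index:

```
{
  "@timestamp": "2016-09-18T09:14:47.000Z",
  "type": "cpu",
  "containerName": "my-nginx",
  "cpu": {
    "totalUsage": 0.02
  }
}
```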
To gain some more visibility into the log messages, I added some of the fields from the menu shown on the left in the above screenshot. I started with the type and containerName fields and then explored some of the other indexed fields.
Querying options in Kibana are varied. You can start with a free-text search for a specific string or use a field-level search, which lets you match specific values within a given field using field:value syntax.
To search for logs for a specific container, I entered the following query in Kibana:
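For example, assuming a container named my-nginx (a hypothetical name), the field-level query would be:

```
containerName: "my-nginx"
```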
Free-text search is the simplest query method, but because we are analyzing metrics, it is not the best way to go about analyzing the statistics unless you are looking for a specific container name.
Visualizing Container Stats
Visualizations are one of the biggest advantages of working with ELK, and the sky’s the limit as far as the number of container log visualizations is concerned. You can slice and dice the stats any way you like — it all boils down to what data you want to see.
Building visualizations in Kibana does require a certain amount of expertise, but the result is worthwhile. You can end up with a nice monitoring dashboard for your Docker containers.
Here are a few examples.
Number of Containers
Sometimes, it’s easy to lose track of the number of containers that we have running. In a production environment, this number can easily reach twenty per host or more. To see a unique count of the running containers, I used the Metric visualization to display a count of the containerName field:
Average CPU, Memory, and Network Over Time
Another example is to create a line chart that visualizes the average resource consumption per container over time. The configuration below is for network stats, but the same configuration can be applied to all types of metrics. All you have to do is change the field that is used to aggregate the Y axis in the chart.
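For the curious, this kind of Kibana line chart maps to an Elasticsearch date histogram with an average sub-aggregation, split per container. Below is a rough sketch of the equivalent query body — the metric field name net.rxBytes is an assumption, so substitute whichever network field appears in your own index:

```
{
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "30s" },
      "aggs": {
        "per_container": {
          "terms": { "field": "containerName" },
          "aggs": {
            "avg_net": { "avg": { "field": "net.rxBytes" } }
          }
        }
      }
    }
  }
}
```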
The resulting line chart:
Compiling these visualizations into one dashboard is easy — it’s simply a matter of selecting the Dashboard tab and then manually adding the visualizations.
The Bottom Line
Dockbeat was easy to install and get up and running — it worked right out of the box in my local sandbox environment and required no extra configurations! If you’re looking for a lightweight monitoring tool, Dockbeat is a good way to start.
As I said in the introduction, there is no perfect logging solution for Docker. Containers produce other useful output, including Docker daemon events (such as attach, commit, and copy) and Docker logs (where available). Dockbeat does not collect these, even though they are needed for a more holistic view of a Dockerized environment.
Looking ahead, this seems like the logical next addition for this community beat.
Published at DZone with permission of Daniel Berman, DZone MVB. See the original article here.