Routing Data from Docker to Prometheus Server via Fluentd

Learn how to route container data to Prometheus with Fluentd.

Possibly the best way to build an economy of scale around your framework, whatever it is, is to build up its library of integrations (or integrators) and see what, and who, your new partners can bring into the mix.

In this blog, we’ll trace the steps to connect Fluentd to a Docker container and route stdout output (our data) to Prometheus. (Prometheus could be similarly configured on Google Cloud Platform, CoreOS, or even Kubernetes.) Later, we’ll also query Prometheus for that data.

When Treasure Data joined the Cloud Native Computing Foundation (CNCF), it not only reinforced its commitment to drive Fluentd toward mainstream use as a logging framework, it also renewed its existing commitment to using Fluentd as an integration point between cloud-native software like Kubernetes, Prometheus, and Docker.


Originally started at SoundCloud around 2012 by an engineer taking a break from Google, Prometheus was the result of frustration that other monitoring tools (and time-series database integrations) weren’t quite up to snuff.

While monitoring is essential to any IT organization, once these orgs began building microservice-style applications and distributing them across literally thousands of bare-metal or virtualized server instances (or even more containers), existing tools proved insufficient to handle, among other things, the incrementalism and scalability of this approach. Thus, even workhorses like Ganglia and Nagios were coming up short.

SoundCloud, the Berlin-based audio streaming service, was also having its own issues with the StatsD and Graphite monitoring tools when Matt Proud joined from Google to build up the Prometheus project, started ‘to apply empirical rigor to large-scale industrial experimentation’, among other things. After its public coming-out in January 2015, Prometheus was picked up by the Kubernetes ecosystem later that year.

So What Is Prometheus?

Prometheus is an open-source monitoring system and time-series database. Written in Go, Prometheus is a natural member of the CNCF ecosystem (where it is officially being incubated), due in part to its design for scalability and extensibility: Prometheus is not just for monitoring Kubernetes applications; it also works for those on Mesos, Docker, OpenStack, and elsewhere.

Primarily a monitoring tool, Prometheus includes a time-series database and a query system. However, it was designed to be extended with a larger datastore as needed. Given that it supports a range of other datastores to this end (including Cassandra, Riak, Google Bigtable, and AWS DynamoDB, among others), it’s no surprise that current Prometheus integrations include Kubernetes, CoreOS (via a Kubernetes stack called Tectonic), Docker, and a range of other tools, VMs, and container technologies. DigitalOcean, Boxever, KPMG, Outbrain, Ericsson, ShowMax, and the Financial Times are all using Prometheus.

So what does an integration look like? Let’s dig in:


So, Why Would You Want to Do It This Way?

It’s already possible to monitor a Docker service directly using Prometheus. So why add Fluentd in the middle? Well, what if you later decide to scale, and you want to monitor aggregate metrics from multiple containers? Or what if you want to route your Docker data to multiple destinations (and not just Prometheus)?
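Fan-out like that is natural in Fluentd: a copy output duplicates each event to several stores. A minimal sketch, assuming you also wanted events written to disk (the file output and its path are illustrative, not part of this tutorial’s config):

```
<match docker.**>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type file
    path /var/log/fluent/docker
  </store>
</match>
```

Adding a third destination later is just another `<store>` block, with no changes to the container side.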

Configuring the Fluentd Input Plugin for Docker

The first thing you’ll want to do is get Fluentd installed on your host.

Once that’s done, and Fluentd is running (and can be stopped and started), it’s time to install the plugin.

Add this line to your application’s Gemfile:

gem 'fluent-plugin-prometheus'

And then execute:

$ bundle

Or install it yourself as:

$ td-agent-gem install fluent-plugin-prometheus

NOTE: you’ll need to be running Ruby >= 2.0 for this plugin to install properly. We recommend using RVM to get the proper Ruby version installed.

Setting up Prometheus on a Docker Host

Once you have a Docker host up and running, download the precompiled Prometheus release using wget as follows:

$ wget https://github.com/prometheus/prometheus/releases/download/0.16.1/prometheus-0.16.1.linux-amd64.tar.gz -O - | tar zxf -

And then start Prometheus server up:

./prometheus-0.16.1.linux-amd64/prometheus -config.file=/opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-prometheus-0.1.3/misc/prometheus.yaml -storage.local.path=./prometheus/metrics

Incidentally, you should be running Prometheus against the prometheus.yaml that got installed along with fluent-plugin-prometheus. It looks like this:

# A job to scrape an endpoint of Fluentd running on localhost.
- job_name: fluentd
  scrape_interval: 5s
  metrics_path: /metrics
  target_groups:
    - targets:
      - 'localhost:24231'

You can easily test whether your Prometheus server is up and running by opening the URLs it exposes in a browser on another host.

The metrics endpoint should show a page of text-only results listing various performance metrics for the Prometheus service itself.

You can also try the expression browser. Pay attention to that one, as we’ll use it later to query our Prometheus server given our directives.
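With default settings, those pages are served on Prometheus’s port 9090. A quick sketch of the URLs to open, assuming you browse from the Docker host itself (substitute the host’s address otherwise):

```shell
# Prometheus listens on port 9090 by default
HOST=localhost
echo "http://$HOST:9090/metrics"   # text-only performance metrics for the Prometheus server itself
echo "http://$HOST:9090/graph"     # the expression browser we'll use for queries later
```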

Routing the Data to a Prometheus Instance

This is a matter of configuring your fluentd.conf or td-agent.conf with the appropriate directives to route the data correctly to Prometheus.

For our example today, you’ll want to edit it to do the following:

  1. Get all stdout commands entered within the Docker container.
  2. Route these – and send them – to your Prometheus server.
  3. Increment your docker_command_log metric as more commands are entered into your container.

First, open your td-agent.conf in a text editor:

$ sudo nano /etc/td-agent/td-agent.conf

Now, let’s look at the directives:

<source>
  @type forward
</source>

<source>
  @type prometheus
</source>

<filter docker.**>
  @type prometheus
  <metric>
    name docker_command_log
    type counter
    desc total docker commands
    #key log
  </metric>
</filter>

<match docker.**>
  @type copy
  <store>
    @type stdout
  </store>
</match>

These settings ensure that, as we are collecting console commands from our Docker container, we’re routing them to our Prometheus server, as a metric. The metric will be incremented as more Docker container commands are logged.

Once done, restart your Fluentd instance to take your new settings into account.

$ sudo service td-agent restart

Next, start your Docker container, from which you will be logging console commands:

$ sudo docker run -ti --name test --log-driver=fluentd ubuntu /bin/bash
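The fluentd log driver sends records to localhost:24224 by default, and depending on your Docker version the default tag may be docker.&lt;container-id&gt; or just the container ID. Setting both explicitly keeps the docker.** match in td-agent.conf reliable; a hedged sketch of the same command with the options spelled out:

```
$ sudo docker run -ti --name test \
    --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag='docker.{{.ID}}' \
    ubuntu /bin/bash
```

(On older Docker versions, the tag option was named fluentd-tag.)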

Finally, from within your Docker container, start entering commands. You can verify that Fluentd is picking them up by tailing td-agent.log in a separate window:

# tail -f /var/log/td-agent/td-agent.log


Querying Your Prometheus Instance

Last, from our browser, we’ll query our Prometheus instance for the data we sent it from our Docker container.


You should see a web UI like the one shown here:


Enter the string docker_command_log in the expression editor, and press Enter.


If everything is working, the expression editor should auto-complete your docker_command_log expression.

You should also see the metric in the graph, populated with the number of commands you’ve entered in the Docker container.

Entering more commands in your container and refreshing the browser should increment this number.

Next Steps

  • You can learn more about Prometheus here: www.prometheus.io
  • Would you like to build the easiest possible logging infrastructure you can? Get Fluentd!
  • There are more than two hundred input, output, and other plugins. You can see them sorted in descending order of popularity here: fluentd.org/plugins/all
  • If you are interested in seeing the plug-ins by category, go here: fluentd.org/plugins

Last but not least, get Treasure Data. You can always ask if you need any help!

A great big shoutout goes to Muga Nishizawa and Sri Ramana for getting me unstuck at various times while preparing this tutorial. Thanks guys!

Published at DZone with permission of John Hammink. See the original article here.
