Docker Centralized Logging With ELK Stack
In this guide, you will learn how to deploy ELK and combine it with Filebeat to aggregate container logs. For this, we are going to build a custom Docker image.
As your infrastructure grows, it becomes crucial to have a reliable centralized logging system. Log centralization is becoming a key aspect of a variety of IT tasks and provides you with an overview of your entire system.
The best solution is to aggregate the logs from all containers, enriched with metadata, so that you get better traceability options along with awesome community support. This is where the ELK Stack comes into the picture. ELK, also known as the Elastic Stack, is a combination of modern open-source tools like Elasticsearch, Logstash, and Kibana. It is a complete end-to-end log analysis solution you can use for your system.
Each component has a defined role to play: Elasticsearch is best at storing raw logs, Logstash helps to collect and transform the logs into a consistent format, and Kibana adds a great visualization layer and helps you manage your system in a user-friendly manner.
In this guide, you will learn how to deploy ELK and start aggregating container logs. Here we are going to combine ELK with Filebeat to aggregate the container logs. For this, we are going to build a custom Docker image.
Step 1 - Configuring Filebeat
Let’s begin with the Filebeat configuration. First, you have to create a working directory and a Dockerfile to build an image:
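A minimal sketch of the setup commands; the directory name `filebeat_docker` matches the one used later in this guide:

```shell
# Create a working directory for the Filebeat image and its config
mkdir filebeat_docker && cd $_
touch Dockerfile filebeat.yml
```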
Now, open the Dockerfile in your preferred text editor, and copy/paste the lines mentioned below:
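A sketch of what such a Dockerfile could look like; the base image tag `7.5.1` follows the version mentioned later in this guide, and the `/usr/share/dockerlogs` path matches the mount described in this section:

```dockerfile
# Sketch: base image tag and paths are assumptions
FROM docker.elastic.co/beats/filebeat:7.5.1

# Copy our custom configuration over the default one
COPY filebeat.yml /usr/share/filebeat/filebeat.yml

USER root
# Directory where the host's container logs will be mounted,
# and permission fixes so Filebeat accepts the config file
RUN mkdir -p /usr/share/dockerlogs/data && \
    chown -R root /usr/share/filebeat/ && \
    chmod go-w /usr/share/filebeat/filebeat.yml
```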
Inside the filebeat_docker directory, create a filebeat.yml file that contains the configuration for Filebeat. For this guide, we are going to use a minimal configuration:
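A minimal filebeat.yml sketch that reads the mounted JSON log files and ships them to Logstash; the Logstash host/port and the log path are assumptions to adapt to your setup:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # Path inside the container where the host's Docker logs are mounted
      - '/usr/share/dockerlogs/data/*/*-json.log'
    json.message_key: log
    json.keys_under_root: true

# Enrich events with container metadata via the Docker socket
processors:
  - add_docker_metadata: ~

output.logstash:
  hosts: ["127.0.0.1:5044"]
```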
Now, it’s time to create the Filebeat Docker image:
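Assuming the files above are in place, the image can be built with a command along these lines (the tag `filebeatimage` is an arbitrary choice):

```shell
docker build -t filebeatimage .
```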
To verify if the image was built successfully:
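For example (the image and container names here are illustrative):

```shell
# List local images and check that the new one appears
docker images

# Run the container with the two mounts discussed below
docker run -d \
  --name filebeat_elk \
  -v /var/lib/docker/containers:/usr/share/dockerlogs/data:ro \
  -v /var/run/docker.sock:/var/run/docker.sock \
  filebeatimage
```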
While running the filebeat_elk container, you created two mounts using the -v parameter:
- /var/lib/docker/containers:/usr/share/dockerlogs/data: maps the host machine's Docker logs, which reside in /var/lib/docker/containers, to /usr/share/dockerlogs/data inside the container. Note the :ro suffix, which denotes that the mount is read-only.
- /var/run/docker.sock: binds the Filebeat container to the host's Docker daemon socket, which allows the Filebeat container to gather Docker metadata and container log entries.
Filebeat Installation via DEB
There is an alternate way to install Filebeat on your host machine. At the time of writing, the Filebeat version is 7.5.1; you can download the latest version of Filebeat from here.
To install the downloaded .deb package:
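For example, assuming version 7.5.1 for the amd64 architecture:

```shell
# Download the .deb package and install it with dpkg
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.5.1-amd64.deb
sudo dpkg -i filebeat-7.5.1-amd64.deb
```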
You can find the configuration file at /etc/filebeat/filebeat.yml.
Step 2 - Configuring ELK
You can either use a remote server to host your ELK stack or can launch containers within your existing system.
Before you get going, make sure that the following ports are listening:
- Elasticsearch - Port 9200
- Logstash - Port 5044
- Kibana - Port 5601
We are going to use the latest official image of Elasticsearch as of now. So begin by pulling the image from Docker Hub:
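At the time of writing, that is version 7.5.1, hosted on the Elastic Docker registry:

```shell
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.5.1
```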
Now, create a directory named docker_elk, where all your configuration files and Dockerfiles will reside:
$ mkdir docker_elk && cd $_
Inside docker_elk, create another directory for elasticsearch, and inside it create an elasticsearch.yml file. Open it in your preferred text editor and copy the configuration settings as they are:
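A minimal elasticsearch.yml sketch; the security and monitoring flags are assumptions based on a single-node setup with X-Pack enabled:

```yaml
cluster.name: "docker-cluster"
network.host: 0.0.0.0

# X-Pack: basic (free) license, with security and monitoring enabled
xpack.license.self_generated.type: basic
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
```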
Note that you can set xpack.license.self_generated.type to trial if you wish to evaluate the commercial features of X-Pack for 30 days.
Open the Dockerfile in your preferred text editor and copy the lines mentioned below as they are:
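A sketch of such a Dockerfile, assuming the 7.5.1 image and the default config path:

```dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:7.5.1

# chown makes elasticsearch the file owner, matching the other config files
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
```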
Here, chown is used to change the file owner to elasticsearch, matching the other files in the container.
Now, you are going to set up a Dockerfile for Kibana, and again you have to pull the latest image from the Elastic Docker registry:
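Again assuming version 7.5.1:

```shell
docker pull docker.elastic.co/kibana/kibana:7.5.1
```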
Inside docker_elk, create a directory for kibana, and inside of it, you have to create a kibana.yml file and a Dockerfile. kibana.yml will consist of the following configurations. Note that you have to change the values of elasticsearch.username and elasticsearch.password:
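A kibana.yml sketch; the hostname elasticsearch assumes the Compose service name used later in this guide, and the credentials are placeholders to replace with your own:

```yaml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true

# Change these credentials to match your Elasticsearch setup
elasticsearch.username: elastic
elasticsearch.password: yourstrongpasswordhere
```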
Your Dockerfile will look something like this:
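A sketch, assuming the 7.5.1 image and default config path:

```dockerfile
FROM docker.elastic.co/kibana/kibana:7.5.1
COPY kibana.yml /usr/share/kibana/config/
```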
The container image for Logstash is available from the Elastic Docker registry. Again, at the time of writing, the current version is 7.5.1; you can find the latest version of Logstash here.
Now, create a directory for Logstash inside docker_elk and add the necessary files as shown below:
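For example:

```shell
# Create the logstash directory with its Dockerfile, settings, and pipeline
mkdir logstash && cd $_
touch Dockerfile logstash.yml logstash.conf
```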
Copy the lines mentioned below into logstash.yml. Make sure that you enter the right username and password in xpack.monitoring.elasticsearch.username and xpack.monitoring.elasticsearch.password:
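A logstash.yml sketch; host and credentials are placeholders:

```yaml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch:9200"]

# Replace with the credentials configured for your cluster
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: yourstrongpasswordhere
```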
Now, add the following lines into your Dockerfile:
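A sketch, assuming the 7.5.1 image and the default settings and pipeline paths:

```dockerfile
FROM docker.elastic.co/logstash/logstash:7.5.1
COPY logstash.yml /usr/share/logstash/config/
COPY logstash.conf /usr/share/logstash/pipeline/
```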
Apart from this, you have to create a logstash.conf file. Here, in the elasticsearch output reference, you will find the user and password fields; make sure you change the values as per your system:
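A logstash.conf sketch that listens for Filebeat on port 5044 and writes to Elasticsearch under a filebeat-* index; the host and credentials are placeholders:

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "elastic"
    password => "yourstrongpasswordhere"
    index => "filebeat-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```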
As you are through with the setup of your stack's components, the directory structure of your project should look something like this:
Now, it’s time to create a Docker Compose file, which will let you run the stack.
Step 3 - Docker Compose
Create a docker-compose.yml file in the docker_elk directory. Here you are going to define and run your multi-container application consisting of Elasticsearch, Kibana, and Logstash.
You can copy the content mentioned below into your docker-compose.yml file. Please make sure that you change the ES_JAVA_OPTS values as needed. For this guide, ES_JAVA_OPTS is set to 256 MB, but in real-world scenarios you might want to increase the heap size as per requirements.
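A docker-compose.yml sketch for the three services; the service names, network name, and password are assumptions to adapt to your setup (they match the placeholders used in the config sketches in this guide):

```yaml
version: '3.2'

services:
  elasticsearch:
    build: elasticsearch/
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: yourstrongpasswordhere
      discovery.type: single-node
    networks:
      - elk

  logstash:
    build: logstash/
    ports:
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build: kibana/
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge
```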
Now, to build and run the ELK stack, you have to run the following command in your docker_elk directory:
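For example, in detached mode:

```shell
docker-compose up -d
```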
To ensure that the pipeline is working all fine, run the following command to see the Elasticsearch indices:
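For example, with placeholder credentials (you should see a filebeat-* index in the listing once logs start flowing):

```shell
curl -u elastic:yourstrongpasswordhere 'http://localhost:9200/_cat/indices?v'
```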
Now, it is time to pay a visit to our Kibana dashboard. Open your browser and enter the URL http://your-ip-addr-here:5601. Now enter the predefined username and password that you configured earlier.
In your Kibana dashboard, go to the Management tab, and under Kibana, click on Index Patterns. In the first row, you will find the filebeat-* index, which has already been identified by Kibana.
Now, go to the Discover tab on the Kibana dashboard and view your container logs along with the metadata under the selected index pattern, which could look something like this:
You have now installed and configured the ELK Stack on your host machine, which is going to collect the raw logs from your Docker containers into the stack, where they can later be analyzed or used to debug applications.
Published at DZone with permission of Sudip Sengupta . See the original article here.