
Docker Centralized Logging With ELK Stack


In this guide, you will learn how to deploy the ELK Stack and combine it with Filebeat to aggregate container logs. For this, we are going to build a custom Docker image.


As your infrastructure grows, it becomes crucial to have a reliable centralized logging system. Log centralization is becoming a key aspect of a variety of IT tasks and provides you with an overview of your entire system.

The best solution is to aggregate the logs from all containers, enriched with metadata, so that you get better traceability options, all backed by awesome community support. This is where the ELK Stack comes into the picture. ELK, also known as the Elastic Stack, is a combination of modern open-source tools like Elasticsearch, Logstash, and Kibana. It is a complete end-to-end log analysis solution you can use for your system.

Each component has a defined role to play: Elasticsearch is best at storing the raw logs, Logstash helps to collect and transform the logs into a consistent format, and Kibana adds a great visualization layer and helps you manage your system in a user-friendly manner.

In this guide, you will learn how to deploy ELK and start aggregating container logs. Here, we are going to combine ELK with Filebeat to gather the container logs. For this, we are going to build a custom Docker image.

Step 1 - Configuring Filebeat

Let’s begin with the Filebeat configuration. First, you have to create a Dockerfile to create an image:

Shell
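A minimal sketch, assuming the filebeat_docker working directory referenced below:

$ mkdir filebeat_docker && cd $_
$ touch Dockerfile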


Now, open the Dockerfile in your preferred text editor and copy/paste the below-mentioned lines:

Dockerfile
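A minimal sketch of such a Dockerfile, assuming Filebeat 7.5.1 from the official Elastic registry:

FROM docker.elastic.co/beats/filebeat:7.5.1

# Run as root so Filebeat can read the Docker log files mounted in later
USER root

# Ship our configuration; Filebeat refuses to start if its config file
# is writable by group or others
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
RUN chown root /usr/share/filebeat/filebeat.yml && \
    chmod go-w /usr/share/filebeat/filebeat.yml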


In the filebeat_docker directory, create a filebeat.yml file that contains the configuration for Filebeat. For this guide, we are going to use a minimal filebeat.yml file.

YAML
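A minimal sketch: the log path and Docker socket match the mounts described in the next step, while the output address assumes Logstash is reachable on localhost port 5044:

filebeat.inputs:
  - type: docker
    containers:
      # Read container logs from the directory mounted from the host
      path: "/usr/share/dockerlogs/data"
      ids:
        - "*"

# Enrich every event with container name, image, and labels
processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"

# Forward all events to Logstash
output.logstash:
  hosts: ["127.0.0.1:5044"]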


Now, it’s time to create the Filebeat Docker image:

Shell
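The image tag filebeatimage below is just an example; use whatever name suits you:

$ docker build -t filebeatimage .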


To verify if the image was built successfully:

Shell
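$ docker images

The freshly built image should show up in the listing. With that confirmed, a run command along these lines starts the container with the mounts described below (filebeatimage being the example tag from the build step):

$ docker run -d \
    --name filebeat_elk \
    -v /var/lib/docker/containers:/usr/share/dockerlogs/data:ro \
    -v /var/run/docker.sock:/var/run/docker.sock \
    filebeatimage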


For the filebeat_elk container, you have created two mounts using the parameter -v:

  • /var/lib/docker/containers:/usr/share/dockerlogs/data: You have mapped the host machine's Docker logs, which reside in /var/lib/docker/containers, to /usr/share/dockerlogs/data inside the container. Note the :ro suffix, which denotes that the mount is read-only.
  • /var/run/docker.sock:/var/run/docker.sock: This binds the host's Docker daemon socket into the Filebeat container, which allows Filebeat to gather Docker metadata and container log entries.

Filebeat Installation via DEB

There is an alternate way to install Filebeat on your host machine. At the time of writing, the Filebeat version is 7.5.1; you can download the latest version of Filebeat from here.

To install the downloaded .deb file:

Shell
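Assuming the 7.5.1 package mentioned above was downloaded:

$ sudo dpkg -i filebeat-7.5.1-amd64.deb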


You can find the configuration file at /etc/filebeat/filebeat.yml.

Step 2 - Configuring ELK

You can either use a remote server to host your ELK stack or launch the containers on your existing system.

Before you get going, make sure that the following ports are available for the stack to listen on:

  • Elasticsearch - Port 9200 and Port 9300
  • Logstash - Port 5044
  • Kibana - Port 5601

Elasticsearch

We are going to use the latest official image of Elasticsearch available at the time of writing, so begin by pulling the image from the Elastic Docker registry:

Shell
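Assuming version 7.5.1, the current release at the time of writing:

$ docker pull docker.elastic.co/elasticsearch/elasticsearch:7.5.1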


Now, create a directory named docker_elk, where all your configuration files and Dockerfiles will reside:

$ mkdir docker_elk && cd $_

Inside docker_elk, create another directory, elasticsearch, and create the Dockerfile and elasticsearch.yml files inside it:

Shell
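A sketch, assuming the subdirectory is simply named elasticsearch:

$ mkdir elasticsearch && cd $_
$ touch Dockerfile elasticsearch.yml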


Open the elasticsearch.yml file in your preferred text editor and copy the configuration settings as they are:

YAML
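A minimal sketch; enabling X-Pack security here is what makes the elastic username and password used later in this guide work:

cluster.name: "docker-cluster"
network.host: 0.0.0.0

# Free basic license; change to "trial" to evaluate commercial features
xpack.license.self_generated.type: basic
# Enable authentication so the built-in elastic user can be used
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true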


Note that you can set xpack.license.self_generated.type from basic to trial if you wish to evaluate the commercial features of X-Pack for 30 days.

Open the Dockerfile in your preferred text editor, then copy the below-mentioned lines and paste them as they are:

Dockerfile
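A sketch along these lines, with the chown explained below:

FROM docker.elastic.co/elasticsearch/elasticsearch:7.5.1

COPY elasticsearch.yml /usr/share/elasticsearch/config/
USER root
# Hand the configuration file over to the elasticsearch user
RUN chown elasticsearch:elasticsearch /usr/share/elasticsearch/config/elasticsearch.yml
USER elasticsearch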


The chown command changes the file owner to elasticsearch, in line with the other files in the container.

Kibana

Now, you are going to set up the Dockerfile for Kibana, and again you have to pull the latest image from the Elastic Docker registry:

Shell
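Again assuming version 7.5.1:

$ docker pull docker.elastic.co/kibana/kibana:7.5.1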


Inside docker_elk, create a directory for Kibana, and inside it, create the Dockerfile and kibana.yml files:

Shell
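A sketch, assuming the directory is named kibana:

$ mkdir kibana && cd $_
$ touch Dockerfile kibana.yml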


kibana.yml will consist of the following configuration. Note that you have to change the values of elasticsearch.username and elasticsearch.password:

YAML
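A sketch, assuming Elasticsearch will be reachable under the hostname elasticsearch (the Compose service name used in Step 3) and the sample password used throughout this guide:

server.name: kibana
server.host: "0.0.0.0"

# Where Kibana finds Elasticsearch, and the credentials to use
elasticsearch.hosts: ["http://elasticsearch:9200"]
elasticsearch.username: elastic
elasticsearch.password: yourstrongpasswordhere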


The Dockerfile, meanwhile, will look something like this:

Dockerfile
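A minimal sketch:

FROM docker.elastic.co/kibana/kibana:7.5.1

# Replace the default configuration with ours
COPY kibana.yml /usr/share/kibana/config/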


Logstash

The container image for Logstash is available from the Elastic Docker registry. Again, at the time of writing, the current version is 7.5.1; you can find the latest version of Logstash here.

Shell
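$ docker pull docker.elastic.co/logstash/logstash:7.5.1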


Now, create a directory for Logstash inside docker_elk and add the necessary files as shown below:

Shell
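A sketch, assuming the directory is named logstash; the three files are the ones configured below:

$ mkdir logstash && cd $_
$ touch Dockerfile logstash.yml logstash.conf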


Copy the below-mentioned lines into logstash.yml. Make sure that you enter the right username and password in xpack.monitoring.elasticsearch.username and xpack.monitoring.elasticsearch.password, respectively:

YAML
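A minimal sketch; the Elasticsearch address again assumes the Compose service name from Step 3:

http.host: "0.0.0.0"

# Ship Logstash's own monitoring data to Elasticsearch
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch:9200"]
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: yourstrongpasswordhere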


Now, add the following lines to your Dockerfile:

Dockerfile
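A sketch; the official image picks up pipeline definitions from /usr/share/logstash/pipeline/:

FROM docker.elastic.co/logstash/logstash:7.5.1

COPY logstash.yml /usr/share/logstash/config/
COPY logstash.conf /usr/share/logstash/pipeline/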


Apart from this, you have to create a logstash.conf file. In the elasticsearch output block you will find the host, user, and password; make sure you change the values as per your system:

Plain Text
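A sketch of such a pipeline: it listens for Beats traffic on port 5044 and writes to a daily filebeat-* index, the pattern Kibana picks up later; host, user, and password are the sample values used throughout this guide:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "elastic"
    password => "yourstrongpasswordhere"
    index => "filebeat-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}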


As you are through with the setup of your stack's components, the directory structure of your project should look something like this:

Plain Text
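Assuming the directory and file names used above:

docker_elk
├── elasticsearch
│   ├── Dockerfile
│   └── elasticsearch.yml
├── kibana
│   ├── Dockerfile
│   └── kibana.yml
└── logstash
    ├── Dockerfile
    ├── logstash.conf
    └── logstash.yml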


Now, it’s time to create a Docker Compose file, which will let you run the stack.

Step 3 - Docker Compose

Create a docker-compose.yml file in the docker_elk directory. Here you are going to define and run your multi-container application consisting of Elasticsearch, Kibana, and Logstash.

You can copy the below-mentioned content into your docker-compose.yml file. Please make sure that you change the ELASTIC_PASSWORD and ES_JAVA_OPTS values. For this guide, ES_JAVA_OPTS is set to 256 MB, but in real-world scenarios you might want to increase the heap size as per your requirements.

YAML
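A sketch under a few assumptions: the services are named elasticsearch, logstash, and kibana, share a bridge network, are built from the three directories above, and use the sample password:

version: '3.2'

services:
  elasticsearch:
    build: elasticsearch/
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: yourstrongpasswordhere
      # Single-node cluster, so skip production discovery checks
      discovery.type: single-node
    networks:
      - elk

  logstash:
    build: logstash/
    ports:
      - "5044:5044"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build: kibana/
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge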


Now, to build the ELK stack, you have to run the following command in your docker_elk directory:

Shell
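The -d flag keeps the stack running in the background:

$ docker-compose up -d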


To ensure that the pipeline is working fine, run the following command to see the Elasticsearch indices:

Shell
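Since security is enabled, pass the elastic credentials to curl:

$ curl -u elastic:yourstrongpasswordhere 'http://localhost:9200/_cat/indices?v'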


Now, it is time to pay a visit to our Kibana dashboard. Open your browser and enter the URL http://your-ip-addr-here:5601. Now enter the predefined username and password; in our case, it is elastic and yourstrongpasswordhere, respectively.

In your Kibana dashboard, go to the Management tab, and under Kibana, click on Index Patterns. In the first row, you will find the filebeat-* index, which has already been identified by Kibana.

Now, go to the Discover tab on the Kibana dashboard and view your container logs along with the metadata under the selected index pattern, which could look something like this:

Conclusion

You have now installed and configured the ELK Stack on your host machine, which is going to collect the raw logs from your Docker containers into the stack, where they can later be analyzed or used to debug applications.

Topics:
cloud native, docker, elk, file, serverless

Published at DZone with permission of Sudip Sengupta. See the original article here.

Opinions expressed by DZone contributors are their own.
