
Docker Centralized Logging With ELK Stack

By Sudip Sengupta · Jul. 29, 20 · Tutorial


As your infrastructure grows, it becomes crucial to have a reliable centralized logging system. Log centralization is becoming a key aspect of a variety of IT tasks and provides you with an overview of your entire system.

The best approach is to aggregate the logs from all containers, enriched with metadata, so that you get better traceability, and to do so with tooling that has strong community support. This is where the ELK Stack comes into the picture. ELK, also known as the Elastic Stack, combines the modern open-source tools Elasticsearch, Logstash, and Kibana into a complete end-to-end log analysis solution for your system.

Each component has a defined role to play: Elasticsearch stores the raw logs, Logstash collects and transforms them into a consistent format, and Kibana provides a visualization layer that lets you manage and explore your system in a user-friendly manner.

In this guide, you will learn how to deploy ELK and start aggregating container logs. We are going to combine ELK with Filebeat to collect the container logs, and for this we will build a custom Docker image.

Step 1 - Configuring Filebeat

Let’s begin with the Filebeat configuration. First, you have to create a Dockerfile to create an image:

Shell
$ mkdir filebeat_docker && cd $_
$ touch Dockerfile && nano Dockerfile

Now, open the Dockerfile in your preferred text editor and add the following lines:

Dockerfile
FROM docker.elastic.co/beats/filebeat:7.5.1

COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN mkdir /usr/share/filebeat/dockerlogs
RUN chown -R root /usr/share/filebeat/
RUN chmod -R go-w /usr/share/filebeat/


In the filebeat_docker directory, create a filebeat.yml file that holds the Filebeat configuration. For this guide, we are going to use a minimal configuration:

YAML
filebeat.inputs:
  - type: docker
    containers:
      path: "/usr/share/dockerlogs/data"
      stream: "stdout"
      ids:
        - "*"
      cri.parse_flags: true
      combine_partial: true
      exclude_files: ['\.gz$']

processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.logstash:
  hosts: ["127.0.0.1:5044"]

logging.level: error
logging.to_files: false
logging.to_syslog: false
logging.metrics.enabled: false
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

ssl.verification_mode: none
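
Before baking this file into an image, you can have Filebeat itself validate it. This check is not part of the original walkthrough; it runs the stock 7.5.1 image against a local copy of the file:

```shell
$ docker run --rm \
    -v "$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
    docker.elastic.co/beats/filebeat:7.5.1 \
    test config
```

If the configuration parses cleanly, Filebeat reports `Config OK`; otherwise it prints the offending key, which is much easier to debug now than after the image is built.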


Now, it’s time to create the Filebeat Docker image:

Shell
$ docker build -t filebeatimage .
Sending build context to Docker daemon  3.584kB
Step 1/6 : FROM docker.elastic.co/beats/filebeat:7.5.1
7.5.1: Pulling from beats/filebeat
c808caf183b6: Already exists
a07383b84bc8: Pull complete
a3c8dd4531b4: Pull complete
5547f4a87d0c: Pull complete
d68e041d92cd: Pull complete
7cfb3f76a272: Pull complete
748d7fe7bf07: Pull complete
Digest: sha256:68d87ae7e7bb99832187f8ed5931cd253d7a6fd816a4bf6a077519c8553074e4
Status: Downloaded newer image for docker.elastic.co/beats/filebeat:7.5.1
 ---> 00c5b17745d1
Step 2/6 : COPY filebeat.yml /usr/share/filebeat/filebeat.yml
 ---> f6b75829d8d6
Step 3/6 : USER root
 ---> Running in 262c41d7ce58
Removing intermediate container 262c41d7ce58
 ---> 1ffcda8f39cf
Step 4/6 : RUN mkdir /usr/share/filebeat/dockerlogs
 ---> Running in 8612b1895ac7
Removing intermediate container 8612b1895ac7
 ---> 483d29e65dc7
Step 5/6 : RUN chown -R root /usr/share/filebeat/
 ---> Running in 4a6ad8b22705
Removing intermediate container 4a6ad8b22705
 ---> b779a9da7ac9
Step 6/6 : RUN chmod -R go-w /usr/share/filebeat/
 ---> Running in bb9638d12090
Removing intermediate container bb9638d12090
 ---> 85ec125594ee
Successfully built 85ec125594ee
Successfully tagged filebeatimage:latest


To verify that the image was built successfully:

Shell
$ docker images
REPOSITORY      TAG           IMAGE ID            CREATED             SIZE
filebeatimage   latest        85ec125594ee        7 seconds ago       514MB


For the filebeat_elk container, you create two mounts using the -v parameter:

  • /var/lib/docker/containers:/usr/share/dockerlogs/data — maps the host machine's Docker logs, which reside in /var/lib/docker/containers, to /usr/share/dockerlogs/data inside the container. Note the :ro suffix, which mounts the path read-only.
  • /var/run/docker.sock — binds the host's Docker daemon socket into the Filebeat container, which allows Filebeat to gather each container's metadata along with its log entries.
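
Putting those two mounts together, a run command for the filebeat_elk container would look roughly like the following sketch (the container name and mount paths are taken from the bullets above; adjust them to your setup):

```shell
$ docker run -d \
    --name filebeat_elk \
    -v /var/lib/docker/containers:/usr/share/dockerlogs/data:ro \
    -v /var/run/docker.sock:/var/run/docker.sock \
    filebeatimage
```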

Filebeat Installation via DEB

Alternatively, you can install Filebeat directly on your host machine. At the time of writing, the current Filebeat version is 7.5.1; you can download the latest version from the Elastic downloads page.

To install the downloaded .deb file:

Shell
$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.5.1-amd64.deb
$ sudo dpkg -i filebeat-7.5.1-amd64.deb


You can then find the configuration file at /etc/filebeat/filebeat.yml.

Step 2 - Configuring ELK

You can either host your ELK stack on a remote server or launch the containers on your existing system.

Before you get going, make sure the following ports are free for the stack to listen on:

  • Elasticsearch - Port 9200 and Port 9300
  • Logstash - Port 5044
  • Kibana - Port 5601
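
Once the stack is up later in this guide, you can confirm that each service is actually reachable with a small TCP probe. This helper is an illustration, not part of the original tutorial:

```python
import socket

# Return True if a TCP connection to host:port succeeds within the timeout.
def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    services = [("Elasticsearch", 9200), ("Elasticsearch transport", 9300),
                ("Logstash", 5044), ("Kibana", 5601)]
    for name, port in services:
        state = "listening" if port_open("127.0.0.1", port) else "not reachable"
        print(f"{name} (port {port}): {state}")
```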

ElasticSearch

We are going to use the latest official Elasticsearch image available at the time of writing. Begin by pulling it from the Elastic Docker registry:

Shell
$ docker pull docker.elastic.co/elasticsearch/elasticsearch:7.5.1
7.5.1: Pulling from elasticsearch/elasticsearch
c808caf183b6: Already exists
05ff3f896999: Pull complete
82fb7fb0a94e: Pull complete
c4d0024708f4: Pull complete
136650a16cfe: Pull complete
968db096c092: Pull complete
42547e91692f: Pull complete
Digest: sha256:b0960105e830085acbb1f9c8001f58626506ce118f33816ea5d38c772bfc7e6c
Status: Downloaded newer image for docker.elastic.co/elasticsearch/elasticsearch:7.5.1
docker.elastic.co/elasticsearch/elasticsearch:7.5.1


Now, create a directory named docker_elk, where all your configuration files and Dockerfiles will reside:

$ mkdir docker_elk && cd $_

Inside docker_elk, create another directory for Elasticsearch, and create a Dockerfile and an elasticsearch.yml file:

Shell
$ mkdir elasticsearch && cd $_
$ touch Dockerfile && touch elasticsearch.yml


Open the elasticsearch.yml file in your preferred text editor and copy in the following configuration as is:

YAML
---
cluster.name: "docker-cluster"
network.host: 0.0.0.0

xpack.license.self_generated.type: basic
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true


Note that you can change xpack.license.self_generated.type from basic to trial if you wish to evaluate the commercial features of X-Pack for 30 days.

Open the Dockerfile in your preferred text editor and add the following lines as is:

Dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:7.5.1
COPY --chown=elasticsearch:elasticsearch ./elasticsearch.yml /usr/share/elasticsearch/config/


The --chown flag changes the file's owner to elasticsearch, in line with the other files in the container.

Kibana

Now you are going to set up the Dockerfile for Kibana. Again, begin by pulling the image from the Elastic Docker registry:

Shell
$ docker pull docker.elastic.co/kibana/kibana:7.5.1
7.5.1: Pulling from kibana/kibana
c808caf183b6: Already exists
e12a414b7b04: Pull complete
20714d0b39d8: Pull complete
393e0a5bccf2: Pull complete
b142626e938b: Pull complete
b28e35a143ca: Pull complete
728725922476: Pull complete
96692e1a8406: Pull complete
e4c3cbe1dbbe: Pull complete
bb6fc46a19d1: Pull complete
Digest: sha256:12b5e37e0f960108750e84f6b2f8acce409e01399992636b2a47d88bbc7c2611
Status: Downloaded newer image for docker.elastic.co/kibana/kibana:7.5.1
docker.elastic.co/kibana/kibana:7.5.1


Inside docker_elk, create a kibana directory and, within it, a Dockerfile and a kibana.yml file:

Shell
$ mkdir kibana && cd $_
$ touch Dockerfile && touch kibana.yml


kibana.yml consists of the following configuration. Note that you have to change the values of elasticsearch.username and elasticsearch.password:

YAML
---
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true

elasticsearch.username: elastic
elasticsearch.password: yourstrongpasswordhere


The Dockerfile, in turn, looks like this:

Dockerfile
FROM docker.elastic.co/kibana/kibana:7.5.1
COPY ./kibana.yml /usr/share/kibana/config/


Logstash

The container image for Logstash is available from the Elastic Docker registry. Again, at the time of writing, the current version is 7.5.1; you can find the latest version of Logstash on the Elastic website.

Shell
$ docker pull docker.elastic.co/logstash/logstash:7.5.1
7.5.1: Pulling from logstash/logstash
c808caf183b6: Already exists
7c07521065ed: Pull complete
d0d212a3b734: Pull complete
418bd04a229b: Pull complete
b22f374f97b1: Pull complete
b65908943591: Pull complete
2ee12bfc6e9c: Pull complete
309701bd1d88: Pull complete
b3555469618d: Pull complete
2834c4c48906: Pull complete
bae432e5da20: Pull complete
Digest: sha256:5bc89224f65459072931bc782943a931f13b92a1a060261741897e724996ac1a
Status: Downloaded newer image for docker.elastic.co/logstash/logstash:7.5.1
docker.elastic.co/logstash/logstash:7.5.1


Now, create a directory for Logstash inside docker_elk and add necessary files as shown below:

Shell
$ mkdir logstash && cd $_
$ touch Dockerfile && touch logstash.yml


Copy the following lines into logstash.yml. Make sure that you enter the right username and password in xpack.monitoring.elasticsearch.username and xpack.monitoring.elasticsearch.password, respectively:

YAML
---
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: yourstrongpasswordhere


Now, add the following lines to your Dockerfile:

Dockerfile
FROM docker.elastic.co/logstash/logstash:7.5.1
COPY ./logstash.yml /usr/share/logstash/config/
COPY ./logstash.conf /usr/share/logstash/pipeline/


Apart from this, you have to create a logstash.conf file. The elasticsearch output block below references the host, user, and password; make sure you change these values to match your system:

Plain Text
input {
  tcp {
    port => 5000
    codec => json
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "yourstrongpasswordhere"
  }
}
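
With the tcp input above listening on port 5000 with codec => json, any process can ship an event by writing one JSON object per line to that socket. A minimal sketch (the host, port, and field names here are illustrative, not from the original article):

```python
import json
import socket

# Build a single JSON-encoded log event, newline-terminated,
# matching Logstash's tcp input with codec => json.
def encode_event(message: str, **fields) -> bytes:
    event = {"message": message, **fields}
    return (json.dumps(event) + "\n").encode("utf-8")

# Open a TCP connection to Logstash and write the payload.
def send_event(host: str, port: int, payload: bytes) -> None:
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

if __name__ == "__main__":
    payload = encode_event("hello from the app", level="info", service="demo")
    try:
        send_event("127.0.0.1", 5000, payload)  # requires the stack to be up
        print("event shipped")
    except OSError:
        print("Logstash is not reachable on port 5000")
```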


Now that you are through with the setup of your stack's components, the directory structure of your project should look something like this:

Plain Text
.
├── elasticsearch
│   ├── Dockerfile
│   └── elasticsearch.yml
├── kibana
│   ├── Dockerfile
│   └── kibana.yml
└── logstash
    ├── Dockerfile
    ├── logstash.conf
    └── logstash.yml

3 directories, 7 files


Now, it’s time to create a Docker Compose file, which will let you run the stack.

Step 3 - Docker Compose

Create a docker-compose.yml file in the docker_elk directory. Here you are going to define and run your multi-container application consisting of Elasticsearch, Kibana, and Logstash.

You can copy the content below into your docker-compose.yml file. Please make sure that you change the ELASTIC_PASSWORD and ES_JAVA_OPTS values. For this guide, ES_JAVA_OPTS is set to 256 MB, but in real-world scenarios you might want to increase the heap size as required.

YAML
version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: yourstrongpasswordhere
      discovery.type: single-node
    networks:
      - elk_stack

  logstash:
    build:
      context: logstash/
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk_stack
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
    ports:
      - "5601:5601"
    networks:
      - elk_stack
    depends_on:
      - elasticsearch

networks:
  elk_stack:
    driver: bridge

volumes:
  elasticsearch:
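
Before building anything, it is worth letting Compose validate the file. Run the following in the docker_elk directory; it parses docker-compose.yml and prints either the resolved configuration or the first syntax error it finds:

```shell
$ docker-compose config
```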


Now, to build the ELK stack, you have to run the following command in your docker_elk directory:

Shell
$ docker-compose up -d
Starting elastic_elk ... done
Starting kibana_elk   ... done
Starting logstash_elk ... done
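
Elasticsearch can take a while to come up after docker-compose, so the first few requests against port 9200 may fail. A small readiness poll saves guessing; this helper is an illustration, not part of the original tutorial (the credentials are assumed to match ELASTIC_PASSWORD above):

```python
import base64
import time
import urllib.error
import urllib.request

# Poll a URL until it answers with HTTP 200 or the deadline passes.
def wait_for_http(url: str, user: str = "", password: str = "",
                  timeout: float = 120.0, interval: float = 3.0) -> bool:
    headers = {}
    if user:
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        headers["Authorization"] = f"Basic {token}"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            req = urllib.request.Request(url, headers=headers)
            with urllib.request.urlopen(req, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; retry until the deadline
        time.sleep(interval)
    return False

if __name__ == "__main__":
    ok = wait_for_http("http://localhost:9200", "elastic",
                       "yourstrongpasswordhere", timeout=10, interval=2)
    print("Elasticsearch is up" if ok else "timed out waiting for Elasticsearch")
```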


To ensure that the pipeline is working, run the following command to list the Elasticsearch indices:

Shell
$ curl 'localhost:9200/_cat/indices?v' -u elastic:yourstrongpasswordhere

health status index                             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .triggered_watches                m-l01yMmT7y2PYU4mZ6-RA   1   0          0            0      6.5kb          6.5kb
green  open   .watcher-history-10-2020.01.10    SX3iYGedRKKCC6JLx_W8fA   1   0       1523            0        2mb            2mb
green  open   .management-beats                 ThHV2q9iSfiYo__s2rouIw   1   0          6            1     40.5kb         40.5kb
green  open   .ml-annotations-6                 PwK7Zuw7RjytoWFuCCulJg   1   0          0            0       283b           283b
green  open   .monitoring-kibana-7-2020.01.10   8xVnx0ksTHShds7yDlHQvw   1   0       1006            0    385.4kb        385.4kb
green  open   .monitoring-es-7-2020.01.10       CZd89LiNS7q-RepP5ZWhEQ   1   0      36412          340     16.4mb         16.4mb
green  open   .apm-agent-configuration          e7PRBda_QdGrWtV6KECsMA   1   0          0            0       283b           283b
green  open   .ml-anomalies-shared              MddTZQ7-QBaHNTSmOtUqiQ   1   0          1            0      5.5kb          5.5kb
green  open   .kibana_1                         akgBeG32QcS7AhjBOed3LA   1   0       1105           28    687.1kb        687.1kb
green  open   .ml-config                        CTLI-eNdTkyBmgLj3JVrEA   1   0         22            0     56.6kb         56.6kb
green  open   .ml-state                         gKx28CMGQiuZyx82bNUoYg   1   0          0            0       283b           283b
green  open   .security-7                       krH4NlJeThyQRA-hwhPXEA   1   0         36            0     83.6kb         83.6kb
green  open   .logstash                         7wxswFtbR3eepuWZHEIR9w   1   0          0            0       281b           281b
green  open   .kibana_task_manager_1            ft60q2R8R8-nviAyc0caoQ   1   0          2            1     16.2kb         16.2kb
yellow open   filebeat-7.5.1-2020.01.10-000001  1-RGhyG9Tf-wGcepQ49mmg   1   1          0            0       283b           283b
green  open   .monitoring-alerts-7              TLxewhFyTKycI9IsjX0iVg   1   0          6            0     40.9kb         40.9kb
green  open   .monitoring-logstash-7-2020.01.10 dc_S5BhsRNuukwTxbrxvLw   1   0       4774            0      1.1mb          1.1mb
green  open   .watches                          x7QAcAQZTrab-pQuvonXpg   1   0          6            6    120.2kb        120.2kb
green  open   .ml-notifications-000001          vFYzmHorTVKZplMuW7VSmw   1   0         52            0     81.6kb         81.6kb


Now, it is time to pay a visit to our Kibana dashboard. Open your browser and enter the URL http://your-ip-addr-here:5601. Now enter the predefined username and password; in our case, it is elastic and yourstrongpasswordhere, respectively.

In your Kibana dashboard, go to the Management tab, and under Kibana, click on Index Patterns. In the first row, you will find the filebeat-* index, which already has been identified by Kibana.

Now, go to the Discover tab on the Kibana dashboard to view your container logs, along with their metadata, under the selected index pattern.

Conclusion

You have now installed and configured the ELK Stack on your host machine, which will collect raw logs from your Docker containers into the stack; these can later be analyzed or used to debug your applications.


Published at DZone with permission of Sudip Sengupta. See the original article here.

Opinions expressed by DZone contributors are their own.
