Sprinkle Some ELK on Your Spring Boot Logs

By Ion Pascari · Mar. 25, 2020 · Tutorial


One day, I heard about the ELK stack and its advantages, so I decided to get my hands on it. Unfortunately, I struggled to find solid documentation and supplemental content on getting started, so I decided to write my own.

To start, let's get familiar with what the ELK stack is:

"ELK" is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch.
https://www.elastic.co/what-is/elk-stack

So basically,

  • Elasticsearch takes care of the storage and manages searching and analytics via REST endpoints.
  • Logstash is the “pac-man” which absorbs, filters, and sends data.
  • Kibana is responsible for the fancy way of viewing the results.

For starters, we need to download the stack (I'm on Windows):

  1. Elasticsearch - https://www.elastic.co/downloads/elasticsearch
  2. Logstash - https://www.elastic.co/downloads/logstash (the zip archive)
  3. Kibana - https://www.elastic.co/downloads/kibana

Once downloaded, you can unzip them right away; we'll come back to them later. Now, we are going to set up a simple Spring Boot project that'll generate some logs we can look at later. Here are the dependencies we need at the moment:

XML

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>

To keep things simple, I decided to go with Actuator, as it's a fast way to get some logs. Now, let's create a service named ActuatorMetricsService that will be responsible for our logs.

Java

@Service
@Slf4j
public class ActuatorMetricsService {

    private final MetricsEndpoint metricsEndpoint;
    private final HealthEndpoint healthEndpoint;
    private final InfoEndpoint infoEndpoint;

    @Autowired
    public ActuatorMetricsService(MetricsEndpoint metricsEndpoint, HealthEndpoint healthEndpoint, InfoEndpoint infoEndpoint) {
        this.metricsEndpoint = metricsEndpoint;
        this.healthEndpoint = healthEndpoint;
        this.infoEndpoint = infoEndpoint;
    }

    @Scheduled(initialDelay = 6000, fixedDelay = 60000)
    public void fetchMetrics() {
        metricsEndpoint.listNames().getNames().forEach(n -> {
            log.info(n + " = " + metricsEndpoint.metric(n, Collections.emptyList()).getMeasurements());
        });
    }

    @Scheduled(initialDelay = 6000, fixedDelay = 30000)
    public void fetchHealth() {
        HealthComponent health = healthEndpoint.health();
        log.info("health = {}", health.getStatus());
    }

    @Scheduled(initialDelay = 6000, fixedDelay = 60000)
    public void fetchInfo() {
        infoEndpoint.info().forEach((k, v) -> log.info(k + " = " + v));
    }
}

There are a few things to mention about this class if you are not familiar with what you see:

  • @Slf4j — Generates private static final org.slf4j.Logger log = org.slf4j.LoggerFactory.getLogger(ActuatorMetricsService.class) for the annotated class, to be used later in our code.
  • MetricsEndpoint, HealthEndpoint, InfoEndpoint — Actuator classes used to expose metrics, application health, and application information.
  • @Scheduled — Annotation that marks a method to be scheduled. For this to work, you will have to enable scheduling with @EnableScheduling (see the sketch below).

We will get metrics and app information every minute with an initial delay of six seconds and app health every 30 seconds with an initial delay of six seconds. 
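
Since @Scheduled only fires when scheduling is enabled, here is a minimal sketch of what the application class could look like (the class name is illustrative, not necessarily the one used in the original project):

Java

// Minimal sketch of the Spring Boot entry point with scheduling enabled.
// The class name is illustrative; @EnableScheduling is what makes the
// @Scheduled methods in ActuatorMetricsService actually run.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;

@SpringBootApplication
@EnableScheduling
public class BootifulElkApplication {

    public static void main(String[] args) {
        SpringApplication.run(BootifulElkApplication.class, args);
    }
}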

Now, let's get to setting up our logs. For that, we'll have to create the logback.xml in the resources directory.

XML

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property scope="context" name="log.fileExtension" value="log"/>
    <property scope="context" name="log.directory" value="/logs"/>
    <property scope="context" name="log.fileName" value="bootiful-elk"/>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] [${HOSTNAME}] [%thread] %level %logger{36}@%method:%line - %msg%n</pattern>
        </layout>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.directory}/${log.fileName}.%d{yyyy-MM-dd}.${log.fileExtension}</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] [${HOSTNAME}] [%thread] %level %logger{36}@%method:%line - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>

This will get us the logs both in our console and in a file. Running the application now should give this kind of output (your values will differ, of course):

[2020-03-22 20:28:25.256] [LT-IPASCARI] [scheduling-1] INFO c.e.l.b.s.ActuatorMetricsService@fetchHealth:39 - health = UP

[2020-03-22 20:28:26.262] [LT-IPASCARI] [scheduling-1] INFO c.e.l.b.s.ActuatorMetricsService@lambda$fetchMetrics$0:32 - jvm.memory.max = [MeasurementSample{statistic=VALUE, value=5.577900031E9}]

...

[2020-03-22 20:28:26.716] [LT-IPASCARI] [scheduling-1] INFO c.e.l.b.s.ActuatorMetricsService@lambda$fetchMetrics$0:32 - process.start.time = [MeasurementSample{statistic=VALUE, value=1.584901694856E9}]

[2020-03-22 20:28:26.719] [LT-IPASCARI] [scheduling-1] INFO c.e.l.b.s.ActuatorMetricsService@lambda$fetchInfo$1:44 - app = {name=bootiful-elk, description=Demo project for Spring Boot, version=0.0.1-SNAPSHOT, encoding=UTF-8, java={version=1.8.0_171}}


The same logs should appear in the file too. Now that we've set up our application, it's time to get to the ELK stack. I am going to discuss two methods of getting your logs into Kibana:

  1. Tell Logstash to look into your log file(s).
  2. Tell Logstash to listen for log entries.

We'll start with the first one, as it is, in my opinion, the more difficult of the two to set up.

First of all, you need to know that there are three important parts of Logstash:

  • Input plugin — enables a specific source of events to be read by Logstash. You can check out all of them here (https://www.elastic.co/guide/en/logstash/current/input-plugins.html); we are going with file for this demo.
  • Filter plugin — performs intermediary processing on an event. Filters are often applied conditionally, depending on the characteristics of the event. You can find plenty of them at https://www.elastic.co/guide/en/logstash/current/filter-plugins.html; we will use a few, such as grok, date, and mutate.
  • Output plugin — sends event data to a particular destination. Outputs are the final stage in the event pipeline. All the plugins are listed at https://www.elastic.co/guide/en/logstash/current/output-plugins.html; in this demo, we will make use of elasticsearch and stdout.

Now that we've got the basics, let's get our hands dirty. Find your unzipped Logstash folder, go to its config folder (logstash-7.6.1\config), and create a file named logstash-file.conf with this content:

Plain Text

input {
    file {
        path => "D:/logs/*.log"
        codec => "plain"
        type => "logback"
    }
}

output {
    stdout { }
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "bootiful-elk-file-%{+YYYY.MM.dd}"
    }
}

Let's take a look at what we have here, as it is quite simple for the moment.

  • input plugin
    • file — means we are dealing with a file as an input.
      • path - the absolute path to your log file(s). This is a required setting.
      • codec - a convenient way to decode your data before it enters the pipeline, without needing a separate filter. plain is the default, but I declared it explicitly because it is worth being aware of.
      • type - stored as part of the event itself, so you can also use it to search in Kibana.
  • output plugin
    • stdout - a simple output that prints to the STDOUT of the shell running Logstash. This output can be quite convenient when debugging plugin configurations. The default codec is rubydebug, which outputs event data using the Ruby "awesome_print" library.
    • elasticsearch - if you plan to use the Kibana web interface, use the elasticsearch output plugin to get your log data into Elasticsearch.
      • hosts - sets the host(s) of the remote instance. Since we'll launch Elasticsearch locally, it'll be on 127.0.0.1:9200.
      • index - the index to write events to. This can be dynamic using the %{foo} syntax. The default value, "logstash-%{+YYYY.MM.dd}", will partition your indices by day so you can more easily delete old data or search only specific date ranges.

Okay, now that we understand what we have in logstash-file.conf, it is time to start our services. We will start them in the following order:

  1. Elasticsearch — elasticsearch-7.6.1\bin\elasticsearch.bat. You can check whether it started with curl -G 127.0.0.1:9200.
  2. Kibana — kibana-7.6.1-windows-x86_64\bin\kibana.bat. It'll start the Kibana web interface on 127.0.0.1:5601. You can go ahead and check it out (skip all the tutorials).
  3. Logstash — go to logstash-7.6.1\bin and, from the command line, run the following command to pick up the created configuration: logstash -f ../config/logstash-file.conf.
  4. Application — run your Spring application.

We've got our log streaming set up, but we want to check it out, right? To do that, we have to link our Elasticsearch index to Kibana: go to the Kibana web interface, open Index Patterns, and click Create index pattern. There you will see our defined index, bootiful-elk-file-2020.03.22; write its pattern in the index pattern field and click Next step. You can then add some settings; for now, choose I don't want to use the Time Filter and finish the setup. From that point, you can go to Discover and check out your logs.


Initial logs

You should see something like this. If not, play around with the available filter fields. But you can see that something is not quite right: we got the entire log line as the message, which is not very useful, as we cannot make use of the other fields hidden inside it. For that purpose, we need to bring in the filter plugin, which is going to break our log line down into separate fields that can be used for filtering, sorting, and in KQL (Kibana Query Language).

Let's modify our configuration to look like this:

Plain Text

input {
    file {
        path => "D:/logs/*.log"
        codec => "plain"
        type => "logback"
    }
}

filter {
    if [message] =~ "\tat" {
        grok {
            match => ["message", "^(\tat)"]
            add_tag => ["stacktrace"]
        }
    }

    grok {
        match => [ "message",
                   "(?m)\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{HOSTNAME:host}\] \[%{DATA:thread}\] %{LOGLEVEL:logLevel} %{DATA:class}@%{DATA:method}:%{DATA:line} \- %{GREEDYDATA:msg}"
                 ]
    }

    date {
        match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    }

    mutate {
        remove_field => ["message"]
    }
}

output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "bootiful-elk-file-%{+YYYY.MM.dd}"
    }
}

Now let's discuss what is new here:

  • filter plugin
    • if condition - if the message contains a tab followed by at, we qualify it as a stack trace and add the corresponding tag.
    • grok - Grok is a great way to parse unstructured log data into something structured and queryable. This tool is perfect for syslog logs, Apache and other web server logs, MySQL logs, and in general, any log format written for humans rather than for computer consumption. Make sure your match pattern actually corresponds to your log pattern; you can validate it with an online grok validator (or with the sketch after this list).
    • date - used for parsing dates from fields and then using that date or timestamp as the Logstash timestamp for the event.
    • mutate - allows you to perform general mutations on fields. You can rename, remove, replace, and modify fields in your events. Here I wanted to remove the message field, as I had already broken it down into smaller pieces with grok and did not want it to just lie around.
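
If you want to sanity-check the log layout locally before wiring it into Logstash, here is a small, hedged Java sketch (not part of the original project) that mirrors the grok pattern above with a plain regex, just to show the intended field split; an online grok debugger remains the more faithful tool:

Java

// Sketch: a plain-Java regex roughly mirroring the grok pattern above
// (TIMESTAMP_ISO8601, HOSTNAME, DATA, LOGLEVEL, GREEDYDATA), used only to
// verify that a sample log line splits into the expected fields.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogPatternCheck {

    private static final Pattern LOG_LINE = Pattern.compile(
            "\\[(?<timestamp>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3})\\] " +
            "\\[(?<host>[^\\]]+)\\] \\[(?<thread>[^\\]]+)\\] (?<logLevel>[A-Z]+) " +
            "(?<clazz>[^@]+)@(?<method>[^:]+):(?<line>\\d+) - (?<msg>.*)");

    public static void main(String[] args) {
        String sample = "[2020-03-22 20:28:25.256] [LT-IPASCARI] [scheduling-1] INFO "
                + "c.e.l.b.s.ActuatorMetricsService@fetchHealth:39 - health = UP";

        Matcher matcher = LOG_LINE.matcher(sample);
        if (matcher.matches()) {
            System.out.println("timestamp = " + matcher.group("timestamp"));
            System.out.println("host      = " + matcher.group("host"));
            System.out.println("logLevel  = " + matcher.group("logLevel"));
            System.out.println("msg       = " + matcher.group("msg"));
        } else {
            System.out.println("Pattern does not match -- adjust it before wiring it into grok.");
        }
    }
}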

Okay, let's restart our Logstash: go to logstash-7.6.1\bin and, from the command line, run logstash -f ../config/logstash-file.conf to pick up the updated configuration, then see what we have in Kibana now. You should see something like this:


Now we have proper output: you can observe that our long message line got broken into many separate fields, which can now be used for effective search and analysis. That is pretty much it for the first method; you can go ahead and explore more filtering options, ways of customizing your data, different codecs for specific plugins, and so on.
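
If you'd like to double-check outside of Kibana that the parsed fields really landed in Elasticsearch, you can hit its REST API directly. Below is a minimal, hedged sketch in plain Java (not part of the original project; the index pattern and field name assume the configuration above):

Java

// Sketch: querying Elasticsearch's REST API directly to confirm that the
// grok-parsed fields (e.g. "logLevel") made it into the daily
// "bootiful-elk-file-*" indices. Assumes Elasticsearch runs on 127.0.0.1:9200.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ElasticsearchSanityCheck {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://127.0.0.1:9200/bootiful-elk-file-*/_search?q=logLevel:INFO&size=5");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        // Print the raw JSON response; the "hits" section should contain events
        // with the separate fields produced by the grok filter.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            connection.disconnect();
        }
    }
}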

Now for the second method: pushing your log lines directly from the Spring Boot application to Logstash -> Elasticsearch -> Kibana. For that, we will need to add one more dependency to our pom.xml so it looks like this:

XML

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.3</version>
</dependency>

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <optional>true</optional>
</dependency>

Now that we have our dependency, we can go ahead and modify our logback.xml too.

XML

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property scope="context" name="log.fileExtension" value="log"/>
    <property scope="context" name="log.directory" value="/logs"/>
    <property scope="context" name="log.fileName" value="bootiful-elk"/>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] [${HOSTNAME}] [%thread] %level %logger{36}@%method:%line - %msg%n</pattern>
        </layout>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.directory}/${log.fileName}.%d{yyyy-MM-dd}.${log.fileExtension}</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] [${HOSTNAME}] [%thread] %level %logger{36}@%method:%line - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="STASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>127.0.0.1:5000</destination>
        <!-- encoder is required -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
        <keepAliveDuration>5 minutes</keepAliveDuration>
    </appender>

    <root level="INFO">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="STASH"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>


You might notice what has changed: a new appender appeared in logback.xml, a LogstashTcpSocketAppender with a LogstashEncoder, which will encode our log lines as JSON and send them via TCP to 127.0.0.1:5000.

Now that we know the format of our log lines and where they are headed, we have to set up that destination, which will listen for JSON events on 127.0.0.1:5000. For that, we'll create a new file named logstash-tcp.conf in Logstash's config folder (logstash-7.6.1\config) with this content:

Plain Text

input {
    tcp {
        port => "5000"
        codec => json_lines
    }
}

output {
    stdout {}
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "bootiful-elk-tcp-%{+YYYY.MM.dd}"
    }
}


As mentioned, I set up a destination, which is, in fact, Logstash's source: the tcp input plugin on port 5000 with the json_lines codec. The output, as you might observe, is the same, with one important change: I did not want to flood the file-based index created previously with these log lines, so I decided to use a different index, bootiful-elk-tcp-%{+YYYY.MM.dd}. Also, I am not using any filter plugin here, as the message is encoded/decoded as JSON, which already breaks the log line into separate fields.

Okay, now we can restart our Logstash: go to logstash-7.6.1\bin and, from the command line, run logstash -f ../config/logstash-tcp.conf to pick up the new configuration. Restart the Spring Boot application and see what we have in Kibana now. (Don't forget to repeat the steps of creating a new index pattern.) You should see something like this:

Example Kibana output

Notice that the fields are a bit different, but close to the ones we previously mapped with grok.
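
A nice bonus of the LogstashEncoder route is that you can attach extra structured fields per log statement. Here is a small sketch (not from the original project) using the StructuredArguments helper that ships with logstash-logback-encoder; the field names are just examples:

Java

// Sketch: adding custom key/value fields to a log event so they appear as
// separate fields in Kibana when the event goes through LogstashEncoder.
// StructuredArguments is part of logstash-logback-encoder.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static net.logstash.logback.argument.StructuredArguments.kv;

public class StructuredLoggingExample {

    private static final Logger log = LoggerFactory.getLogger(StructuredLoggingExample.class);

    public void reportMetric(String name, double value) {
        // "metricName" and "metricValue" become top-level JSON fields in the event,
        // while the console/file appenders still render them as readable key=value text.
        log.info("metric sampled: {} {}", kv("metricName", name), kv("metricValue", value));
    }
}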

All right, that is basically it. You can find these two .conf files in the resources folder of my project, along with the other files related to this article, here: https://github.com/theFaustus/bootiful-elk.

Opinions expressed by DZone contributors are their own.
