
Log Management With the ELK Stack on Windows Server — Part 3 — Customization

This series on the ELK stack for log management in performance testing culminates with a tutorial on customization.


In this post, I will explain how to customize Logstash to read and filter your custom log files so you can visualize them in Kibana. In Part 1, I mentioned that Logstash has three stages: the input plugin, the filter plugin, and the output plugin. I also mentioned that we can use different types of input plugins, for example, file, beats, http, etc. In Part 2, I briefly showed how to use Beats to visualize different Windows logs and packet logs via Winlogbeat and Packetbeat, respectively.
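As a quick reminder, a Logstash pipeline configuration is structured around those three stages. The sketch below shows only the skeleton; the concrete plugins are filled in over the following steps:

input {
  # where events come from: file, beats, http, ...
}
filter {
  # how events are parsed and enriched: grok, mutate, ...
}
output {
  # where events are sent: in our case, Elasticsearch
}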

You could ask, "Why customize Logstash for reading log files if we have a Filebeat log shipper shipping different kinds of logs into the ELK Stack?"

Let me answer. Of course, it isn't strictly necessary to customize Logstash; you can ship your logs with Filebeat (via the Beats input) instead of using the file input. But take the following points into consideration:

  1. Filebeat comes with internal modules (Apache, NGINX, MySQL, etc.) that simplify the collection, parsing, and visualization of common log formats down to a single command. So, if your log files are generated by one of these, the Filebeat input plugin is enough (see the sketch after this list), and you will be able to visualize your data the way you want.
  2. Filebeat DOES NOT FILTER. This means that if your log file (e.g. Apache) contains specific content you need to parameterize, Filebeat alone cannot extract it.
  3. If your log file contains specific content you want to parameterize, I recommend the "file" input plugin. Logstash will run the events through your custom filter plugin and add your parameters as fields, so that, in Kibana, you will be able to search or visualize them as you want.
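For reference, if you go the Filebeat route, the Logstash side only needs a Beats input listening on a port. A minimal sketch (5044 is the conventional default port; adjust it to your setup):

input {
  beats {
    port => 5044   # Filebeat ships events to this port
  }
}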

Let's get started.

Step 1: Change the Logstash "Input" Plugin

If you remember from Part 2, our "logstash.json" looks as follows:

(Image: the "logstash.json" configuration from Part 2)

Add the "file" input plugin, instead of "bean" (you can comment it out or simply delete it).

file {
  type => "mylog"
  path => "C:/programdata/elasticsearch/custom-logs-dir/custom-*.log"
  mode => "tail"
  start_position => "beginning"
}

I defined a few parameters:

  • path: From where "Logstash" will be reading log files.
  • type: Types are used mainly to filter your different inputs. Add a type field to all events handled by this input.
  • mode: What mode you want the file input to operate in.
  • start_position: Choose where Logstash starts initially reading files — at the beginning or at the end. The value can be [beginning, end].

Step 2: View Custom Log File

Let's check out our custom log file (you can download it at this link) and copy it into the directory we defined above in the "path" parameter of the input plugin. My log file is named "custom-text-log.log", which matches the pattern "custom-*.log". It looks as follows:

(Image: contents of the "custom-text-log.log" file)

Each line has the following structure (separated by "\t"):

* Date
* Time
* Message
* Argument 1
* Argument 2
* Math operation
* The rest of the parameters (only in one case)
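
For illustration, lines in such a file could look roughly like the following. These sample values are reconstructed from the patterns we build in Step 3 and are not the actual contents of the downloadable file; the fields are separated by tabs:

19-Jun-2018 10:15:32 AM    custommessage       41    7     addition
19-Jun-2018 10:15:33 AM    another message     12    4     division
19-Jun-2018 10:15:34 AM    one more message    59    51    multiplication    extra params here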

In the next step, we will use the filter plugin to perform different operations: checking conditions, defining patterns, adding fields with specific values, etc.

Step 3: Configure the "Filter" Plugin

Let's see how the updated "logstash.json" file looks:

(Image: the updated "logstash.json" file)

We have "input plugin" (which will read files from the defined path), "filter plugin" (which will filter our custom logs) and "output plugin" (which will send filtered information to the destination, in this case, Elasticsearch). Here, we will discuss the "filter plugin" in more detail.

1) In the filter block, the first thing we see is the main if condition:

if ([type] == "mylog") {
  …
}

The input plugin can read data from many sources, and we can set an arbitrary type name to distinguish each of them. In our case, it is "mylog". In the future, if we need to add another input source, we can give it a different type (e.g. "yourlog") and define another filter strategy for that type of logs, as in the sketch below.
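A minimal sketch of that idea (the "yourlog" branch is hypothetical and only there to show the shape):

filter {
  if [type] == "mylog" {
    # grok and mutate filters for our custom log (described below)
  }
  else if [type] == "yourlog" {
    # a different filter strategy for another input source
  }
}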

2) Lines 12-20: the "grok" filter plugin

From the official documentation: "Parse arbitrary text and structure it. Grok is a great way to parse unstructured log data into something structured."

3) Lines 14-18: the "match" option

In the grok filter, I used the "match" configuration option. Each line of your log file arrives as the "message" field. The "match" option checks the message content against each of the defined patterns and, when one matches, adds the extracted information to the specified fields. In our case, we have three patterns (lines 15-17).

We will use different patterns for each of the three different types of lines below:

(Image: a piece of content from the log file)

Line 15: Let's break down each part of this pattern.

(?<DateTime>%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME} %{WORD})

A pattern defined in parentheses is treated as one field. The field name ("DateTime") is set between "<" and ">", and after ">" we define the field's pattern. For detailed information about grok patterns, see the documentation in the references below.

(?<Message>%{WORD} %{WORD} %{WORD})

Again, I use parentheses to group information under one field, and this time, I named it "Message". This pattern will match every line whose message has three words separated by spaces (in our case, "one more message").

%{NUMBER:Arg1}

After the Message data, the pattern takes a number (in this case, 59) and sets it to the field I defined as "Arg1".

%{NUMBER:Arg2}

After the Arg1 data, the pattern takes a number (in this case, 51) and sets it to the field I defined as "Arg2".

%{WORD:Operation}

After the Arg2 data, the pattern takes a string (e.g. "multiplication") and sets it to the field I defined as "Operation."

%{GREEDYDATA:params}

"GREEDYDATA" is used for getting the rest of the information. In our case, after the word "multiplication" in the log file, I was getting the rest of the string (with "GREEDYDATA") and set it to the field named "params."

Line 16: Has the same structure and fields, but with the following difference:

(?<Message>%{WORD} %{WORD})

Instead of three words, this pattern matches messages with two words (in our case, "another message"). Also, it does not use "GREEDYDATA".

Line 17: Has the same structure and fields, but with the following difference:

(?<Message>%{WORD})

Instead of two or three words, this pattern matches messages with only one word (in our case, "custommessage"). Here, too, "GREEDYDATA" is not used.
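Putting the three patterns together, the grok block looks roughly like the sketch below. I am assuming tab ("\t") separators between the fields, since the log file is tab-separated; the exact separators and spacing in the original config may differ:

grok {
  match => {
    "message" => [
      "(?<DateTime>%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME} %{WORD})\t(?<Message>%{WORD} %{WORD} %{WORD})\t%{NUMBER:Arg1}\t%{NUMBER:Arg2}\t%{WORD:Operation}\t%{GREEDYDATA:params}",
      "(?<DateTime>%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME} %{WORD})\t(?<Message>%{WORD} %{WORD})\t%{NUMBER:Arg1}\t%{NUMBER:Arg2}\t%{WORD:Operation}",
      "(?<DateTime>%{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME} %{WORD})\t(?<Message>%{WORD})\t%{NUMBER:Arg1}\t%{NUMBER:Arg2}\t%{WORD:Operation}"
    ]
  }
}

Grok tries the patterns in order and stops at the first successful match (its default break_on_match behavior), so the most specific pattern comes first.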

4) Lines 22-44: if conditions for the patterns.

In this block, I check the Message field to decide which code block will run (lines 22, 28, and 40). Each of them does a different operation:

Line 22: If the Message field has the value "custommessage," then the first block will run (lines 22-26).

In this block, I simply added a new field named "NewField" and set the value to "Hello." To add a new field, I used the "add_field" option in the "mutate" filter plugin (you can read about "mutate" and other filter plugins in the documentation mentioned in the references below).
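A sketch of what this block could look like in the config:

if [Message] == "custommessage" {
  mutate {
    add_field => { "NewField" => "Hello" }   # new field with a constant value
  }
}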

Line 28: If the Message field has the value "another message," then the second block will run (lines 28-38).

In this block, I use the "mutate" plugin again, but before adding new fields, I use "split". It splits the "Message" field into an array (using a space as the separator), and then each element of the array is added as a new field.
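One way to express this, as a sketch using the mutate filter's "split" option and Logstash field references (the field names "Word1" and "Word2" are hypothetical; the author's actual names are not shown here):

if [Message] == "another message" {
  mutate {
    split => { "Message" => " " }      # "Message" becomes an array of words
  }
  mutate {
    add_field => {
      "Word1" => "%{[Message][0]}"     # first word as a new field
      "Word2" => "%{[Message][1]}"     # second word as a new field
    }
  }
}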

Line 40: If the Message field has the value "one more message," then the third block will run (lines 40-44). In this block, I use another "grok" filter to match the "params" field, which was defined at the end of the pattern in line 15 ("%{GREEDYDATA:params}").

Line 44: In this line, I use another "mutate" filter, this time to remove the "params" field, since we no longer need it and will not use it in Kibana.
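A sketch of this third block; the inner pattern for "params" is only a placeholder, since the actual sub-pattern isn't shown in the text:

if [Message] == "one more message" {
  grok {
    match => { "params" => "%{WORD:Param1} %{WORD:Param2}" }   # placeholder sub-pattern
  }
  mutate {
    remove_field => [ "params" ]   # drop "params" once it has been parsed
  }
}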

Step 4: Save & Restart

Before we start to work with Kibana, let's save the configuration file and restart the Logstash service. Let's go through a few notes:

  1. After restarting the services, you can check Elasticsearch indices with the link http://127.0.0.1:9200/_cat/indices?v&pretty. With this link, you will be able to see which indices were created and check other information (index, status, uuid, etc.).
  2. If you don't find the index you defined in this list, check the Logstash log file; something may have gone wrong. In my case, the log file path is "C:\ELK\logstash\logs\logstash-plain.log".

Cheers! We are finished. Now, you can open the “Discover” menu in Kibana and see your result. In my case, it looks as follows:

(Image: the Kibana Discover page)

Now you will be able to filter your logs and visualize them as you want. For example:

(Image: a visualization in the Kibana Visualize page)

Conclusion

That's the end. Thank you for your time. I hope this part helps you better understand the process of transforming custom logs. Please feel free to share your comments and feedback with me.

References:

  1. http://sysadvent.blogspot.com/2016/12/day-12-logstash-fundamentals.html

  2. https://logz.io/blog/filebeat-tutorial/

  3. File input plugin (documentation)

  4. Grok Patterns

  5. Filter plugin

  6. Grok Debugger

