Logstash - Quirky 'multiline'
A couple of days ago, I was forced to reconsider my view regarding 'multiline' and had to revisit the feature. An extract of the input that I used is given below.
ERROR, 2015-04-09 06:08:42, Type: Application, Error message: Failed to execute the command 'UpdateCommand' for table 'CS_Item'; the transaction was rolled back. Ensure that the command syntax is correct.
ERROR OCCURRED FOR: at SynchronizeToDCS() method in using block.. BusinessUnit = 223417 and Source = 127.0.0.1
ERROR DETAILS STACK: at Microsoft.Synchronization.Data.ChangeHandlerBase.CheckZombieTransaction(String commandName, String table, Exception ex)
at Microsoft.Synchronization.Data.SqlServer.SqlChangeHandler.ExecuteCommand(IDbCommand cmd, DataTable applyTable, DataTable failedRows)
at Microsoft.Synchronization.KnowledgeSyncOrchestrator.Synchronize()
at Microsoft.Synchronization.SyncOrchestrator.Synchronize()
at MyStoreSyncPullService.SynchronizeToDCS(DataRow dr, String SessionId)
ERROR SOURCE: Microsoft.Synchronization

The Problem: Using multiline in the file block
As I had tried earlier, and as suggested by multiple forums on the Internet, I parsed the data using a 'multiline' codec in the 'file' block, placed in the 'input' section of the script. The script worked, though only after some hiccups caused by the way Logstash handles files on Windows. The 'file' block is shown below.
file {
    path => "application-log.txt"
    type => "application_log"
    start_position => "beginning"
    codec => multiline {
        pattern => "%{GREEDYDATA:appLogLevel}, %{TIMESTAMP_ISO8601:appTimestamp}, %{GREEDYDATA:appLogDetails}"
        negate => true
        what => "previous"
    }
}
This script block successfully parses data from the input file shown above. However, if the same multiline codec is placed inside a 'tcp' block, the results are unpredictable. The TCP input does not respect line breaks; it splits incoming data at arbitrary positions (apparently determined by buffer size). As a result, the stream is broken at places that cannot be predicted, which defeats the multiline logic.
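For reference, the problematic placement would look something like the sketch below. The original article does not show this block, and the port number is a hypothetical value.

input {
    tcp {
        port => 5000              # hypothetical port, not from the original article
        type => "application_log"
        codec => multiline {      # same codec as above; unreliable here, because TCP framing ignores line breaks
            pattern => "%{GREEDYDATA:appLogLevel}, %{TIMESTAMP_ISO8601:appTimestamp}, %{GREEDYDATA:appLogDetails}"
            negate => true
            what => "previous"
        }
    }
}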
The Solution: Using multiline in the 'filter' block
The solution is to move the multiline logic from the 'input' block into the 'filter' block. The advantage of placing 'multiline' in the 'filter' section of the script is that it works for file input as well as TCP/IP input. While making this change, note that 'multiline' is a 'codec' inside the 'file' block of the 'input' section, whereas it is a block in its own right in the 'filter' section. A sketch of the overall layout is shown below.
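As a minimal sketch of this layout, assuming the same TCP input (the port is again hypothetical, and the pattern is the one used for the file input earlier):

input {
    tcp {
        port => 5000              # hypothetical port
        type => "application_log"
    }
}

filter {
    multiline {                   # a filter block of its own here, not a codec
        pattern => "%{GREEDYDATA:appLogLevel}, %{TIMESTAMP_ISO8601:appTimestamp}, %{GREEDYDATA:appLogDetails}"
        negate => true
        what => "previous"
    }
}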
Parsing the data
Once multiple physical lines have been merged into a single event by the 'multiline' block, we need to follow it with a suitable 'grok' block, which parses the merged message and splits it into the relevant fields for further processing. An extract from a script that illustrates this concept is given below.
multiline {
    patterns_dir => "./patterns"
    pattern => "%{CUSTOM_ERROR_LABEL_2}%{SPACE}%{TIMESTAMP_ISO8601:logTimestamp}%{SPACE}%{GREEDYDATA:logDetails}"
    negate => true
    what => "previous"
}
grok {
    patterns_dir => "./patterns"
    match => [ "message", "%{CUSTOM_ERROR_LABEL_2}%{SPACE}%{TIMESTAMP_ISO8601:logTimestamp}%{SPACE}%{GREEDYDATA:logDetails}" ]
    add_field => { "subType" => "error" }
}
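Both blocks rely on a custom pattern, CUSTOM_ERROR_LABEL_2, loaded from the ./patterns directory. The article does not show its definition, but for the sample input above a plausible definition could look like this (an assumption, not the author's actual pattern file):

# ./patterns/custom_patterns -- hypothetical definition matching the leading "ERROR," label
CUSTOM_ERROR_LABEL_2 (ERROR|WARN|INFO),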
In Summary
By changing the placement of the 'multiline' block and using the proper patterns, I was able to parse application log entries that span multiple physical lines and finally crack this puzzle. One problem still remains, though: how do we ensure that the blocks of lines that follow are not inadvertently concatenated with the current one?