Introducing LEQL: BYTES
Trends in the volume of Log Data can be interpreted in many ways. How do I proactively receive notifications when there is an anomalous spike or drop in the volume of my log data?
One of the most common questions asked by users of our Log Management platform is “where is my log volume coming from?” There are a number of ways to interpret this question. Do I have a log source that is sending more events than usual? Do I have a log that is sending overly verbose log messages? Did a developer turn on DEBUG mode and never turn it off? Or, of course, the opposite: did a source that I expected to send a higher volume send less than usual?
Trends in the volume of log data can be interpreted in many ways, yet the options for exploring that data have historically been limited to counting events or subscribing to a weekly report. The goal of this post is to introduce a new way of analyzing volume trends in your log data and then to take it a step further: how do I proactively receive notifications when there is an anomalous spike or drop in the volume of my log data?
This post has two parts. First, an introduction to the LEQL bytes function. Second, a walkthrough of how to use calculate(bytes) in conjunction with Anomaly Alerts to receive notifications of spikes or drops in log volume.
Part 1: calculate(bytes)
Using this function is quite simple. First, select the log or logs you want to analyze for volume. Next, select the time period you would like to analyze. Then go to the query builder. In simple mode, open the calculate drop-down menu and select bytes. If you are using advanced mode, you can type calculate(bytes) in the query bar. The result will be a table and graph of the volume of the logs you selected.
By default, Logentries takes the time period you have selected and renders a graph with that period broken down evenly across 10 data points. You can get more granularity by switching the query builder to advanced mode and adding the timeslice() function to the end of your query.
Here is an example where we used the query calculate(bytes) timeslice(24) to show volume trends by hour for the last 24 hours’ worth of data.
Another example is calculate(bytes) timeslice(7) over the period of a week to get a breakdown of log volume for the last week by day.
This can be repeated with multiple logs selected, entire log sets selected, or across the entire account by selecting “all” at the top of the log selector pane.
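To make the mechanics concrete, here is a minimal Python sketch of what a bytes-per-timeslice calculation does conceptually: it splits the selected time period into equal buckets and sums the byte size of each event that falls into each bucket. This is purely illustrative; the event format and function name are assumptions, not Logentries internals.

```python
from datetime import datetime, timedelta

def bytes_per_timeslice(events, start, end, slices):
    """Sum event sizes into `slices` equal time buckets.

    `events` is an iterable of (timestamp, message) pairs --
    a toy format assumed for illustration, not the Logentries wire format.
    """
    bucket_width = (end - start) / slices
    buckets = [0] * slices
    for ts, message in events:
        if start <= ts < end:
            i = min(int((ts - start) / bucket_width), slices - 1)
            buckets[i] += len(message.encode("utf-8"))
    return buckets

# Example: 24 hourly buckets, mirroring calculate(bytes) timeslice(24)
start = datetime(2016, 1, 1)
end = start + timedelta(hours=24)
events = [
    (start + timedelta(hours=2), "GET /index.html 200"),
    (start + timedelta(hours=2, minutes=30), "GET /api 500"),
    (start + timedelta(hours=23), "DEBUG cache miss"),
]
volumes = bytes_per_timeslice(events, start, end, 24)
```

With 24 slices over 24 hours, the first two events land in the hour-2 bucket and the last in the hour-23 bucket, which is exactly the hourly breakdown the query above produces.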
To sum up, the calculate(bytes) function allows you to perform a granular exploration of your log data by volume for as far back as your retention period allows. It has always been possible to do this type of analysis on the number of events sent by log type; bytes now enables the investigation to be based on the size of your log events, and it lets customers do their own exploration in real time, any time they want. But how do I know when a spike or drop occurs? The answer: Anomaly Alerts.
Setting up an Anomaly Alert That Uses the Bytes Function
This is where we remove the manual investigation from the equation. The trick is determining the right frequency at which to analyze your log volume. Here we will recommend a few best practices and provide examples.
To set up an anomaly alert, go to the Tags and Alerts page, hover over the icon on the left-hand navigation pane, and select “Create Anomaly Alert.”
Give your Alert a name. In this case, we went with “ALERT: Log Volume Spike”
Next, enter the same query you used to explore your logs in the log view into the query bar:
Scope – This field defines the period for which your first query will run. This sets the baseline for comparison against the next time period. In this case, we want the alert to compare daily log volumes, so we chose 1 day.
Offset – The time period over which the next query will run. This query is compared against the baseline you set in the Scope field. For this example, we chose 1 day.
Threshold – This is the percentage increase or decrease that you want to be the trigger point for the alert.
Choose Logs – Choose which logs you would like to apply this alert to. We recommend that you set up one alert for the account as a whole, then apply additional alerts as needed to individually noisy or verbose log sets or log files.
Then choose how you would like to receive the alert. We entered an email address in this case, but many other options are available. In this particular example, we are using an email address that will update a Flowdock room with the alert (a future blog post on integrating with Flowdock is coming!).
Finally, configure the report slider to dictate how often you want to receive an email or notification once the conditions of the alert are met.
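The Scope/Offset/Threshold comparison described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the actual alerting engine; the function name and the zero-baseline handling are assumptions.

```python
def anomaly_triggered(baseline_bytes, current_bytes, threshold_pct):
    """Return True when the current period's volume deviates from the
    baseline period's volume by more than `threshold_pct` percent,
    in either direction (spike or drop).
    """
    if baseline_bytes == 0:
        # Assumed behavior: any volume after a silent baseline counts as a spike.
        return current_bytes > 0
    change_pct = abs(current_bytes - baseline_bytes) / baseline_bytes * 100
    return change_pct > threshold_pct

# Yesterday's volume (scope = 1 day) vs. today's (offset = 1 day),
# with a 30% threshold:
anomaly_triggered(10_000_000, 14_000_000, 30)  # 40% spike -> True
anomaly_triggered(10_000_000, 9_000_000, 30)   # 10% drop  -> False
```

In other words, with a 1-day scope, a 1-day offset, and a 30% threshold, the alert fires only when today’s byte count differs from yesterday’s by more than 30% in either direction.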
The ability to have the application notify you of irregularities in your log volumes is a big step towards better understanding the nature of the log data in your environment, and such irregularities are often an indicator of events that your DevOps, support, product, or security teams might want to look into.
Published at DZone with permission of John Bosch. See the original article here.
Opinions expressed by DZone contributors are their own.