Separating the signal from the noise is one of the biggest challenges when dealing with machine-generated log data, and it has generally required deep technical expertise. Once you find that signal, however, it can be hugely valuable and can inform business decisions with real impact. Today Logentries is announcing log-level usage reports, one more way we are striving to do the hard work so you don’t have to!
At first glance, log-level usage reports might seem quite rudimentary insofar as they are weekly reports of the log volumes you send to Logentries. However, the reports give you an intelligent breakdown of your log volumes by application, server, and process, as well as at the individual log level. The usage reports are designed to provide IT and DevOps teams with a weekly snapshot of their system activity, broken down at a very fine-grained log level, giving them the ability to see interesting trends in system behavior.

In fact, so many of our customers have been asking our new Logentries Data Insights team to run these reports that we’ve decided to make them widely available. The reports are part of a new service produced by this team of PhDs and applied research scientists in distributed systems, and they are backed by Logentries’ proprietary machine learning and community analytics technology.
Over the past few months we have used these reports to help a large number of our customers investigate spikes in system load and potential issues, as well as to understand business trends and how they relate to their systems. And, might I add, with phenomenal results, based on some really simple analysis. I suppose it doesn’t need to be complex to be valuable…
But don’t take my word for it. According to Rich Archbold, Director of Ops at Intercom:
“The new Logentries Log Analytics Reporting service has provided us with insight into our environment that we would never have been able to get before. We now have deep, log-level visibility into application activity, trends, and potential issues on a weekly basis delivered right into our inbox!”
Specifically, the usage reports provide the following:
- Total number of logs and log sets
- Total log volume across all logs
- Top 20 logs by volume
- Daily breakdown of log volume per log
- List of inactive logs
Sample Usage Report
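To make the report contents above concrete, here is a minimal sketch of how such a summary could be derived from per-log daily volume counts. The data shape, field names, and sample numbers are all hypothetical, for illustration only; this is not how Logentries computes the reports internally.

```python
# Hypothetical per-log daily volumes in bytes, keyed by (log_set, log_name).
daily_volumes = {
    ("api", "nginx.access"): {"Mon": 4_100_000, "Tue": 3_900_000, "Wed": 4_300_000},
    ("api", "app.error"):    {"Mon": 120_000,  "Tue": 95_000,    "Wed": 2_400_000},
    ("billing", "worker"):   {"Mon": 0,        "Tue": 0,         "Wed": 0},
}

# Total number of logs and log sets
total_logs = len(daily_volumes)
total_log_sets = len({log_set for log_set, _ in daily_volumes})

# Total log volume across all logs
total_volume = sum(sum(days.values()) for days in daily_volumes.values())

# Top logs by volume (the real report shows the top 20)
top_by_volume = sorted(
    daily_volumes.items(), key=lambda kv: sum(kv[1].values()), reverse=True
)[:20]

# Inactive logs: nothing received all week
inactive = [name for name, days in daily_volumes.items() if sum(days.values()) == 0]
```

With the sample data above, the billing worker log would be flagged as inactive and the nginx access log would top the volume ranking, mirroring the sections of the weekly report.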
The new reports can be especially useful for:
- Logging cost analysis: When your log volumes increase significantly, you usually want to know why. Looking at your total log volume alone generally won’t tell you what caused the increase or which part of your system it is coming from. A breakdown of your top 20 logs by volume, however, lets you immediately see the log or logs responsible, so you can take the next step: determining whether the increase was the result of higher system load, a system issue, or a rogue process.
- System load analysis: Log volumes are a great indicator for understanding system load. As load increases so too do your log volumes. Understanding what log(s) are resulting in the increased load will give you better visibility into what part of your system is actually seeing the increased workloads.
- Troubleshooting: The most common cause of a sudden spike in log volumes is generally not increased load on a system, but a misconfiguration or application bug that starts to produce cascading errors. A sudden spike from a specific log is often an indicator of issues in a particular application component. Similarly, if a log suddenly becomes inactive, it may signify that something has gone horribly wrong.
- Infrastructure usage analysis: When your cloud bills start to grow, it is common for CFOs and IT leads to sit down weekly or monthly to understand what is driving them. This has been notoriously difficult in the past, and it has fueled the rise of companies like Cloudability and CloudHealth. Looking at your cloud cost breakdown alongside log-level usage data can help you correlate and understand some of the drivers behind those costs, especially if there has been a significant change in your cloud cost structure.
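The troubleshooting check described above, catching both sudden spikes and logs that go silent, can be sketched in a few lines. The function name, data shape, and spike threshold here are assumptions for illustration, not part of the Logentries product.

```python
def flag_anomalies(daily_counts, spike_factor=5.0):
    """Flag logs whose latest daily count far exceeds their trailing average
    (possible cascading errors) or has dropped to zero (possible dead component).

    daily_counts: {log_name: [day1, day2, ..., today]} event counts (hypothetical shape).
    """
    spikes, silent = [], []
    for name, counts in daily_counts.items():
        *history, today = counts
        baseline = sum(history) / len(history) if history else 0
        if baseline > 0 and today > spike_factor * baseline:
            spikes.append(name)       # volume jumped well above its recent norm
        elif baseline > 0 and today == 0:
            silent.append(name)       # log went quiet after steady activity
    return spikes, silent

spikes, silent = flag_anomalies({
    "app.error":    [100, 120, 110, 3_000],     # sudden error spike
    "nginx.access": [4_000, 4_200, 3_900, 4_100],
    "worker.log":   [500, 480, 510, 0],          # went quiet
})
```

Even a crude threshold like this surfaces the two failure modes the report highlights: the erroring component shows up as a spike, and the dead worker shows up as a newly inactive log.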