Focus on Detection: Prometheus and the Case for Time Series Analysis
Prometheus, a monitoring system built on a time series database, can help improve detection and incident management through time series analysis. See how it works using disk utilization as an example.
Detection, in the incident lifecycle, is the observation of a metric at certain intervals and the comparison of that observation against an expected value. Monitoring systems then trigger notifications and alerts based on those observations.
For many teams, on-call is primarily about detection. Monitor everything and make sure we don’t miss out! In organizations with legacy monitoring configurations, getting better at detection is tough. Environments are configured with broadly applied, arbitrarily set thresholds. Sometimes this is due to limitations in the monitoring solution, or the complexity of implementing different thresholds for different measurements. Sometimes it’s a simple reflection that detection was not an area of focus before. Whatever the reason, the impact on on-call teams is measurable: too many false alerts, too many interruptions, and acute alert fatigue.
Teams that measure high on the Incident Management Assessment are focusing on time series analysis in their monitoring and detection systems. Prometheus, as one popular example of a solution using a time series database, is seeing wide adoption in both new projects and within existing environments.
Getting your head around time series can seem daunting. For individuals with years between themselves and their last statistics class, understanding the myriad of options is a barrier. While advanced functionality in these systems requires some thinking, there are plenty of easy use cases to explore in making the case for this type of detection.
Cleaning Up the Disk: A Staple of System Administration for 60 Years
I have no data to support this assertion, but I suspect that if I bucketed all the alerts I’ve received in my career by type, disk utilization would win in a landslide. It’s the easiest system metric to understand, but it can have wide-reaching consequences when it’s in an unhappy state. It’s hard to imagine an environment where volume utilization is not monitored by default on every host. While “disk full” seems like a simple discussion, unpacking it reveals the complexity that every team faces when considering detection methods.
If we can all agree that full disks are bad, we can still have a lively debate on which precursors to FULL we may want to detect and alert on. At what threshold should a team member become involved? The number of TB free? GB? MB? A percentage of the total disk? What if this host is part of a fleet of servers, and losing it is not significant?
A standard approach here is to send a WARNING level alert at 85% used (15% free) and a CRITICAL at 90% used. The thinking is that with only 10% of the volume free, someone should do something! Why? If it took us 3 years to eat up that 90%, is there any reason to believe we’ll chew through the remaining terabyte in the next 10 minutes?
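For concreteness, here is roughly what that convention looks like as a Prometheus expression. This is only a sketch: it uses the same Node Exporter filesystem metrics referenced later in this article (newer exporter releases name them node_filesystem_size_bytes and node_filesystem_free_bytes).

    # Classic static thresholds as PromQL, purely for illustration:
    # utilization = used / total, flagged at 85% and again at 90%
    (node_filesystem_size - node_filesystem_free) / node_filesystem_size > 0.85
    (node_filesystem_size - node_filesystem_free) / node_filesystem_size > 0.90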
The Reference System
Let’s map this discussion to a basic system we can all imagine: 4 core, 4GB RAM, and two volumes, 2GB and 10GB (operating system and application, respectively). For this example, it doesn’t really matter if this is a container, a physical host, or a cloud instance. I’m using Prometheus and the Node Exporter to gather and expose metrics, with Grafana on top for the visualizations.
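For anyone recreating this setup, a minimal sketch of the scrape configuration follows; it assumes the Node Exporter is running locally on its default port, 9100, and that the defaults are acceptable everywhere else. Grafana then only needs Prometheus added as a data source to build the graphs referenced below.

    # prometheus.yml (minimal sketch): scrape the local Node Exporter
    scrape_configs:
      - job_name: node
        scrape_interval: 15s
        static_configs:
          - targets: ['localhost:9100']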
The Trickle
One would expect volume utilization on the OS volume to be a relatively steady state. Other than logs, little change is introduced here. Depending on the application, the 10GB volume may also be pretty flat, or it may get a lot of use. Here we’ll consider a steady, if small, increase of utilization on that volume. As you can see below, the volume is just creeping its way up to full.
The standard approach would send a WARNING right here: we’ve hit that generic 85% utilization threshold. What should someone actually do about it, though?
Predictive Analytics
Using time series data, we can start to apply something approximating prediction to our detection efforts. With the same data above, we can compute the time until the disk is full, given the current rate of change, using the deriv() function in Prometheus:
    node_filesystem_free / deriv(node_filesystem_free[3d]) * -1 > 0
At the current rate of consumption, we have 24 weeks to action this condition. It's probably okay to not fire an alert just yet. This isn’t even informational; no real change is detected in the state of the system.
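When you do want to turn that prediction into an alert, Prometheus also offers predict_linear(), which extrapolates a linear trend forward. The rule below is a sketch, not a tuned recommendation: the alert name, the 24-hour horizon, the one-hour for: duration, and the severity label are all arbitrary choices.

    # Alerting rule sketch: fire only if the 3-day trend predicts the volume
    # will be full within 24 hours, and only after that has held for an hour.
    groups:
      - name: disk
        rules:
          - alert: VolumePredictedFull24h
            expr: predict_linear(node_filesystem_free[3d], 24 * 3600) < 0
            for: 1h
            labels:
              severity: warning
            annotations:
              summary: "Volume predicted to be full within 24 hours"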
The Flood
Let’s consider a different scenario: the same volume, but with far more available space. Starting with about 25% utilization, we see this volume has a relatively steady rate of consumption:
Until something changes:
Given the historical data, it is very unexpected for this volume to see that kind of spike in utilization. If we focus on the rate of change over time, we see the full story:
The standard threshold of 85% utilization will not be triggered… and so a team remains blind to the fact that the rate of change just exceeded historical expectations.
Is that actionable? Perhaps, perhaps not, but it is certainly more significant than the trickle scenario, which fires alerts and interrupts teams with no real situation requiring investigation.
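One hedged sketch of how to surface that kind of change: compare a short-window rate of consumption against the long-window baseline, and flag when the recent rate is some multiple of the historical one. The one-hour and three-day windows and the 4x multiplier below are placeholders to tune for your environment.

    # deriv() of free space is negative while a volume fills, so a value more
    # negative than 4x the 3-day baseline means consumption has accelerated
    deriv(node_filesystem_free[1h]) < 4 * deriv(node_filesystem_free[3d])
      and deriv(node_filesystem_free[3d]) < 0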
All the Data
This is a simple example focusing on a single detectable metric. How does this kind of approach scale to all the metrics your team may actually wish to track? Really, really well, as it turns out. The default behavior of Prometheus exporters is to expose everything (and I really mean everything) for the Prometheus server to scrape. Out of the box, the Prometheus Node Exporter is tracking ~620 discrete measurements on my test Linux instance.
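You can sanity-check a number like that from the Prometheus expression browser; assuming the scrape job is named node, as in the earlier configuration sketch, counting the series it exposes is a single expression:

    # How many time series the "node" job is currently exposing
    count({job="node"})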
This is where these systems really differentiate themselves from the previous generation of detection systems: they default to gathering all the metrics and alerting on none of them. This is in stark contrast to the default behavior of, say, Nagios: gather few measurements, store none, and alert on all.
Actionable Intelligence
Prometheus, and other time series database systems, bring a new kind of insight to the detection phase of the incident lifecycle. They empower teams with more observable data than ever before, without hampering a team’s ability to dig in and understand any one of those metrics. With advanced grouping features, teams can understand these metrics as they relate to different classes of system or application.
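Those grouping features are ordinary PromQL aggregations. As one illustrative sketch, assuming hosts carry a class label (added via relabeling, for example), the worst-case volume utilization per class of system is one expression:

    # Highest volume utilization per class of host; the "class" label is assumed
    max by (class) (
      (node_filesystem_size - node_filesystem_free) / node_filesystem_size
    )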
Using time series analysis, a team can completely rewrite the way detection works in their practice, bringing better fidelity to the measurements and more reliably actionable alerting when it matters. This can materially change the game for anyone trying to reduce MTTR and get more sleep.
VictorOps integrates with Prometheus. Check out the integration guide to get started.
Published at DZone with permission of Matthew Boeckman, DZone MVB.