Avoid the Most Common Database Performance Monitoring Mistakes

The metrics built into modern databases and development tools allow you to zero in on the source of system slowdowns using simple counters and basic math operations.


Finding the causes of slow queries and other operations in a database begins with knowing where to find the performance data you need and how to collect it. The metrics built into MySQL and other modern databases and development tools allow you to zero in on the source of system slowdowns using simple counters and basic mathematical operations.

If a database's poor performance has you scratching your head, start troubleshooting by understanding the causes of system slowdowns and by avoiding the most common performance-metrics mistakes.

Measuring response time in any query-processing system depends on Little's Law, which is key to recording and reporting response times in multithreaded systems of all types. In an April 2011 post on Percona's MySQL Performance Blog, Baron Schwartz explains how Little's Law applies to MySQL. Schwartz's example creates two counters to measure a system's busy time, then adds a third counter to measure weighted busy time: the sum of execution time across all queries, which exceeds plain busy time whenever more than one query is executing simultaneously.

The busy time example highlights the four fundamental metrics: observation interval; the number of queries per interval; total active-query time (busy time); and total execution time of all queries (weighted busy time). Combining these with Little's Law creates four long-term average metrics: throughput, execution time, concurrency, and utilization.
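Under assumed counter readings (a 60-second interval, 3,600 queries completed, 45 seconds of busy time, and 90 seconds of weighted busy time; these are illustrative values, not real MySQL status variables), the four derived averages reduce to simple division:

```sql
-- Illustrative counter readings, not built-in MySQL status variables:
--   interval = 60 s, queries = 3600,
--   busy_time = 45 s, weighted_busy_time = 90 s
SELECT
    3600 / 60 AS throughput,     -- 60 queries/sec (queries / interval)
    90 / 3600 AS avg_exec_time,  -- 0.025 sec/query (weighted busy time / queries)
    90 / 60   AS concurrency,    -- 1.5 queries in flight (weighted busy time / interval)
    45 / 60   AS utilization;    -- busy 75% of the interval (busy time / interval)
```

Note that Little's Law holds for these numbers: concurrency equals throughput times average execution time (60 × 0.025 = 1.5).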

With these metrics, you can use the Universal Scalability Law to model scalability. The metrics can also be used for queueing analysis and capacity planning — all with simple addition, subtraction, multiplication, and division of numbers collected by the counters.
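For background (the formula itself is Neil Gunther's, not quoted in the original post), the Universal Scalability Law models relative capacity C at concurrency N using a contention coefficient σ and a coherency coefficient κ:

```latex
C(N) = \frac{N}{1 + \sigma (N - 1) + \kappa N (N - 1)}
```

Fitting σ and κ to measured throughput and concurrency values tells you where the system stops scaling and begins to degrade.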

A MySQL Performance Monitor Primer

The MySQL Reference Manual explains how to use the Benchmark() function to measure the speed of a specific expression or function. The mysql client displays the approximate time required for the statement to execute below the return value, which is always zero.

MySQL's Benchmark() function can be used to display a statement's approximate execution time. Source: MySQL Manual.
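The manual's example can be reproduced from the mysql client; the expression being timed here is arbitrary:

```sql
-- Evaluate the expression one million times; BENCHMARK() always returns 0,
-- and the mysql client prints the approximate elapsed wall-clock time.
SELECT BENCHMARK(1000000, AES_ENCRYPT('hello', 'goodbye'));
```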

The MySQL Performance Schema monitors MySQL Server execution at a low level by inspecting the internal operation of the server at runtime. An "event" is anything the server does that takes time and has been instrumented so that timing information can be collected. For example, examining the events_waits_current table in the performance_schema database gives you a snapshot of what the server is doing right now.

The events_waits_current table can be examined in the performance_schema database to show the server's current activity. Source: MySQL Manual.
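A minimal query against that table might look like the following (column names are per the Performance Schema documentation; timer values default to picoseconds):

```sql
-- Snapshot of the most recent wait event for each server thread,
-- longest waits first.
SELECT THREAD_ID, EVENT_NAME, TIMER_WAIT
FROM performance_schema.events_waits_current
ORDER BY TIMER_WAIT DESC
LIMIT 10;
```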

The MySQL Reference Manual provides a Performance Schema Quick-Start Guide and a section on Using the Performance Schema to Diagnose Problems.

Ensuring the Accuracy of Your Metrics Data

Any troubleshooting approach relies first and foremost on the accuracy of the performance data being collected. Azul Systems CTO Gil Tene explained at an Oracle OpenWorld 2013 conference session that how you measure latency in a system is as important as what you measure. TheServerSide.com's Maxine Giza reports on Tene's OpenWorld presentation in a September 2013 post.

For example, a response-time metric may report a single instance rather than values aggregated over time that represent both peak and low demand. Conversely, the database's metrics may not record or report critical latency outliers, relying instead on mean and standard deviation measures, which hide exactly the data that's most useful in diagnosing latency problems.

Another mistake many DBAs make, according to Tene, is running load-generator tests in ways that don't represent real-world conditions, so the worst results are silently omitted: a generator that waits for a slow response before issuing its next request drops precisely the samples taken during stalls. Tene calls this "coordinated omission," describes its effect on results as "significant," and says it leads to bad reporting. He recommends that organizations establish latency-behavior requirements and pass/fail criteria to avoid wasting time and resources.
