Profiling MySQL Queries from Performance Schema

By Peter Zaitsev · Apr. 18, '15

[This article was written by Jarvin Real]

When optimizing queries and investigating performance issues, MySQL comes with built-in support for profiling queries, i.e. SET profiling=1;. This is simple to use and already quite handy, so why bother with the PERFORMANCE_SCHEMA alternative?

Because profiling will be removed soon (it is already deprecated in MySQL 5.6 and 5.7). Moreover, the built-in profiling capability can only be enabled per session, which means you cannot capture profiling information for queries running from other connections. If you are using Percona Server, the profiling option for log_slow_verbosity is a nice alternative; unfortunately, not everyone is using Percona Server.
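For reference, enabling that alternative on Percona Server looks roughly like this (a sketch, assuming Percona Server 5.6; setting long_query_time to 0 is only for testing so that every query gets logged with its profile):

mysql> SET GLOBAL slow_query_log = 1;
mysql> SET GLOBAL long_query_time = 0;  -- test only: log every query
mysql> SET GLOBAL log_slow_verbosity = 'profiling';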

Now, for a quick demo: I execute a simple query and profile it below. Note that all of these commands are executed from a single session to my test instance.
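The setup implied before the SHOW PROFILES output below is just enabling the session profiler and running the test query (a minimal sketch; sysbench.sbtest1 is the sysbench test table used throughout this demo):

mysql> SET profiling = 1;
mysql> SELECT * FROM sysbench.sbtest1 LIMIT 1;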

mysql> SHOW PROFILES;
+----------+------------+----------------------------------------+
| Query_ID | Duration   | Query                                  |
+----------+------------+----------------------------------------+
|        1 | 0.00011150 | SELECT * FROM sysbench.sbtest1 LIMIT 1 |
+----------+------------+----------------------------------------+
1 row in set, 1 warning (0.00 sec)
mysql> SHOW PROFILE SOURCE FOR QUERY 1;
+----------------------+----------+-----------------------+------------------+-------------+
| Status               | Duration | Source_function       | Source_file      | Source_line |
+----------------------+----------+-----------------------+------------------+-------------+
| starting             | 0.000017 | NULL                  | NULL             |        NULL |
| checking permissions | 0.000003 | check_access          | sql_parse.cc     |        5797 |
| Opening tables       | 0.000021 | open_tables           | sql_base.cc      |        5156 |
| init                 | 0.000009 | mysql_prepare_select  | sql_select.cc    |        1050 |
| System lock          | 0.000005 | mysql_lock_tables     | lock.cc          |         306 |
| optimizing           | 0.000002 | optimize              | sql_optimizer.cc |         138 |
| statistics           | 0.000006 | optimize              | sql_optimizer.cc |         381 |
| preparing            | 0.000005 | optimize              | sql_optimizer.cc |         504 |
| executing            | 0.000001 | exec                  | sql_executor.cc  |         110 |
| Sending data         | 0.000025 | exec                  | sql_executor.cc  |         190 |
| end                  | 0.000002 | mysql_execute_select  | sql_select.cc    |        1105 |
| query end            | 0.000003 | mysql_execute_command | sql_parse.cc     |        5465 |
| closing tables       | 0.000004 | mysql_execute_command | sql_parse.cc     |        5544 |
| freeing items        | 0.000005 | mysql_parse           | sql_parse.cc     |        6969 |
| cleaning up          | 0.000006 | dispatch_command      | sql_parse.cc     |        1874 |
+----------------------+----------+-----------------------+------------------+-------------+
15 rows in set, 1 warning (0.00 sec)

To demonstrate how we can achieve the same with Performance Schema, we first identify our current connection ID. In the real world, you would want the connection/processlist ID of the thread you want to watch, e.g. from SHOW PROCESSLIST.

mysql> SELECT THREAD_ID INTO @my_thread_id
    -> FROM threads WHERE PROCESSLIST_ID = CONNECTION_ID();
Query OK, 1 row affected (0.00 sec)
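If you want to profile a different connection instead, take its Id from SHOW PROCESSLIST and plug it into the same lookup (a sketch; the processlist ID 123 is just a placeholder, and the table is qualified with performance_schema in case that is not your default database):

mysql> SELECT THREAD_ID INTO @my_thread_id
    -> FROM performance_schema.threads
    -> WHERE PROCESSLIST_ID = 123;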

Next, we identify the bounding EVENT_IDs for the statement stages. We look for the statement we want to profile using the query below against the events_statements_history_long table. Your LIMIT clause may vary depending on how many queries the server is receiving.

mysql> SELECT THREAD_ID, EVENT_ID, END_EVENT_ID, SQL_TEXT, NESTING_EVENT_ID
    -> FROM events_statements_history_long
    -> WHERE THREAD_ID = @my_thread_id
    ->   AND EVENT_NAME = 'statement/sql/select'
    -> ORDER BY EVENT_ID DESC LIMIT 3\G
*************************** 1. row ***************************
       THREAD_ID: 13848
        EVENT_ID: 419
    END_EVENT_ID: 434
        SQL_TEXT: SELECT THREAD_ID INTO @my_thread_id
FROM threads WHERE PROCESSLIST_ID = CONNECTION_ID()
NESTING_EVENT_ID: NULL
*************************** 2. row ***************************
       THREAD_ID: 13848
        EVENT_ID: 374
    END_EVENT_ID: 392
        SQL_TEXT: SELECT * FROM sysbench.sbtest1 LIMIT 1
NESTING_EVENT_ID: NULL
*************************** 3. row ***************************
       THREAD_ID: 13848
        EVENT_ID: 353
    END_EVENT_ID: 364
        SQL_TEXT: select @@version_comment limit 1
NESTING_EVENT_ID: NULL
3 rows in set (0.02 sec)
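On a busy server the last few statements may not include the one you care about, so instead of paging through with LIMIT you can filter on SQL_TEXT directly (a sketch; the LIKE pattern simply matches the demo query above):

mysql> SELECT EVENT_ID, END_EVENT_ID, SQL_TEXT
    -> FROM performance_schema.events_statements_history_long
    -> WHERE THREAD_ID = @my_thread_id
    ->   AND SQL_TEXT LIKE 'SELECT * FROM sysbench.sbtest1%'
    -> ORDER BY EVENT_ID DESC LIMIT 1\G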

From the results above, we are mostly interested in the EVENT_ID and END_EVENT_ID values from the second row; these bound the stage events of this particular query in the events_stages_history_long table.

mysql> SELECT EVENT_NAME, SOURCE, (TIMER_END-TIMER_START)/1000000000 as 'DURATION (ms)'
    -> FROM events_stages_history_long
    -> WHERE THREAD_ID = @my_thread_id AND EVENT_ID BETWEEN 374 AND 392;
+--------------------------------+----------------------+---------------+
| EVENT_NAME                     | SOURCE               | DURATION (ms) |
+--------------------------------+----------------------+---------------+
| stage/sql/init                 | mysqld.cc:998        |        0.0214 |
| stage/sql/checking permissions | sql_parse.cc:5797    |        0.0023 |
| stage/sql/Opening tables       | sql_base.cc:5156     |        0.0205 |
| stage/sql/init                 | sql_select.cc:1050   |        0.0089 |
| stage/sql/System lock          | lock.cc:306          |        0.0047 |
| stage/sql/optimizing           | sql_optimizer.cc:138 |        0.0016 |
| stage/sql/statistics           | sql_optimizer.cc:381 |        0.0058 |
| stage/sql/preparing            | sql_optimizer.cc:504 |        0.0044 |
| stage/sql/executing            | sql_executor.cc:110  |        0.0008 |
| stage/sql/Sending data         | sql_executor.cc:190  |        0.0251 |
| stage/sql/end                  | sql_select.cc:1105   |        0.0017 |
| stage/sql/query end            | sql_parse.cc:5465    |        0.0031 |
| stage/sql/closing tables       | sql_parse.cc:5544    |        0.0037 |
| stage/sql/freeing items        | sql_parse.cc:6969    |        0.0056 |
| stage/sql/cleaning up          | sql_parse.cc:1874    |        0.0006 |
+--------------------------------+----------------------+---------------+
15 rows in set (0.01 sec)

As you can see, the results are pretty close; not exactly the same, but close. SHOW PROFILE shows the duration in seconds, while the results above are in milliseconds.
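If you prefer not to copy the EVENT_ID bounds by hand, each stage row also records its parent statement's EVENT_ID in NESTING_EVENT_ID, so the two lookups can be combined into one query (a sketch, reusing the same thread and demo statement as above):

mysql> SELECT stages.EVENT_NAME, stages.SOURCE,
    ->        (stages.TIMER_END-stages.TIMER_START)/1000000000 AS 'DURATION (ms)'
    -> FROM performance_schema.events_stages_history_long stages
    -> JOIN performance_schema.events_statements_history_long stmts
    ->   ON stmts.THREAD_ID = stages.THREAD_ID
    ->  AND stmts.EVENT_ID = stages.NESTING_EVENT_ID
    -> WHERE stmts.THREAD_ID = @my_thread_id
    ->   AND stmts.SQL_TEXT LIKE 'SELECT * FROM sysbench.sbtest1%'
    -> ORDER BY stages.EVENT_ID;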

Some limitations to this method, though:

  • As we’ve seen, it takes a few hoops to dig out the information we need. Because we have to identify the statement to profile manually, this procedure may not be easy to port into tools like the sys schema or pstop.
  • It only works if Performance Schema is enabled (it is enabled by default since MySQL 5.6.6, yay!).
  • It does not cover all of the metrics available from the native profiling, e.g. CONTEXT SWITCHES, BLOCK IO, SWAPS.
  • Depending on how busy the server you are running the tests on is, the sizes of the history tables may be too small, so you either have to increase them (e.g. the performance_schema_events_stages_history_long_size variable; see the sketch after this list) or lose the history too early. Using ps_history might help in this case, though with a little modification to the queries.
  • The resulting Duration per event may vary; I would think this may be due to the additional timer overhead described in the performance_timers table. In any case, we hope to get this cleared up when this bug is fixed.
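As for the history table sizes mentioned above, you can check the current setting easily, but raising it has to happen at server startup (a sketch):

mysql> SHOW GLOBAL VARIABLES LIKE 'performance_schema_events_stages_history_long_size';
-- This variable (and its events_statements counterpart) is read-only at
-- runtime, so a larger value has to be set in my.cnf and the server restarted.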

Published at DZone with permission of Peter Zaitsev, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
