Side Load May Massively Impact Your MySQL Performance


When we’re looking at benchmarks, we typically run some stable workload in isolation – nothing else is happening on the system. This is not, however, how things happen in the real world, where load varies significantly and many things can be happening concurrently.

It is very typical to hear complaints about MySQL interactive performance – serving simple, standard web traffic is drastically impacted when some heavy queries are run in the background or a backup is taken with mysqldump – a lot more than you would expect from simple resource competition. I finally found some time to look further into this problem and see what can be done to remedy it.

We designed the benchmark the following way: there is a small table (200MB) which completely fits in the InnoDB buffer pool (512MB), and a larger 4GB table which does not fit in the buffer pool. We run uniform sysbench OLTP on the small table and mysqldump on the second table. First we run the tests individually, and then concurrently.

In a perfect world, performance would stay about the same when we run the tests concurrently: sysbench should run completely in memory, using a lot of CPU but no disk IO, while mysqldump should need relatively little CPU and be bound by disk. Also, these are just two “threads” running on a 4-core system, so there should be plenty of CPU to spare.

We’re using Percona Server 5.5.15 for this test, with a buffer pool size of 512MB and innodb_flush_method=O_DIRECT.

Test Setup:

[root@localhost msb_ps_5_5_15]# sysbench --test=oltp --db-driver=mysql --mysql-host=localhost --mysql-table-engine=innodb --mysql-db=test --oltp-table-name=md_cache_test_small --oltp-table-size=1100000 --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5516.sock prepare

[root@localhost msb_ps_5_5_15]# sysbench --test=oltp --db-driver=mysql --mysql-host=localhost --mysql-table-engine=innodb --mysql-db=test --oltp-table-name=md_cache_test_big --oltp-table-size=17600000 --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5516.sock prepare

Running sysbench and mysqldump. Note we run them in a loop to see how the result stabilizes.

[root@localhost msb_ps_5_5_15]# sysbench --test=oltp --db-driver=mysql --num-threads=1 --max-requests=0 --oltp-dist-type=uniform --max-time=180 --oltp-read-only --mysql-host=localhost --mysql-table-engine=innodb --mysql-db=test --oltp-table-name=md_cache_test_small --oltp-table-size=1100000 --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox5516.sock run

[root@localhost msb_ps_5_5_15]# time mysqldump --defaults-file=my.sandbox.cnf test md_cache_test_big > /dev/null

Baseline Run:
When we run the tests individually, sysbench gives about 330 req/sec and mysqldump of the large table completes in about 95 seconds.
When we run them concurrently, after the system reaches a steady state we get only about 2 req/sec from sysbench, and mysqldump takes about 180 seconds.

Yes, you read that right: performance of sysbench OLTP on the small table drops more than 150 times when the heavy mysqldump runs concurrently. mysqldump itself also slows down about 2x.

What is going on here? To understand it, we should take a look at the buffer pool contents.

mysql [localhost] {msandbox} (information_schema) > select concat_ws('.', t.schema, t.name, i.name) as index_name, sum(data_size)/1024/1024 as data_size_mb from innodb_sys_tables as t inner join innodb_sys_indexes as i using(table_id) inner join innodb_buffer_pool_pages_index as p using(index_id) where t.schema='test' group by i.index_id \G




When we’re running sysbench OLTP on its own, the primary key of the small table fits completely in the buffer pool. However, when mysqldump is run concurrently, it reads so many pages from disk that it pushes most of the smaller table out of the buffer pool, with only 12MB remaining. This makes the workload extremely IO-bound, hence the drop in performance.

The performance of mysqldump is impacted too, because we now have two threads competing for what is a single hard drive on this test system.

It is worth noting that MySQL actually uses midpoint insertion for its buffer pool replacement policy. Unfortunately, by default it is configured in a way that makes it quite useless. Blocks are indeed first placed at the head of the “old” sublist, which means they should not push out any hot data in the “young” sublist. However, when you’re doing mysqldump (or running some complex batch query), you are likely to access the data on a page multiple times before being done with it for good. Because of these several accesses, the page gets moved to the young sublist almost immediately, putting high pressure on the buffer pool.
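To make the mechanism concrete, here is a toy Python model of midpoint insertion – an illustration only, not InnoDB's actual implementation. It uses logical "ticks" in place of milliseconds and two dicts in place of InnoDB's single list with a midpoint pointer; all names and sizes are made up for the demo. A scan that touches each page twice washes out the hot working set when the promotion threshold is zero, but not when a threshold is set:

```python
from collections import OrderedDict

class MidpointLRU:
    """Toy model of InnoDB's midpoint-insertion buffer pool policy.
    Simplified for illustration: logical "ticks" stand in for
    milliseconds, and the old/young sublists are separate dicts."""

    def __init__(self, capacity, old_pct=37, old_blocks_time=0):
        old_cap = max(1, capacity * old_pct // 100)
        self.young_cap = capacity - old_cap
        self.old_cap = old_cap
        self.old_blocks_time = old_blocks_time
        self.young = OrderedDict()  # page -> tick of first read (MRU at end)
        self.old = OrderedDict()
        self.tick = 0

    def access(self, page):
        self.tick += 1
        if page in self.young:
            self.young.move_to_end(page)        # hot page: just refresh position
        elif page in self.old:
            first_read = self.old[page]
            # Promote only if enough time has passed since the first read.
            if self.tick - first_read >= self.old_blocks_time:
                del self.old[page]
                self.young[page] = first_read
                while len(self.young) > self.young_cap:
                    demoted, t = self.young.popitem(last=False)
                    self.old[demoted] = t       # coldest young page drops to old
        else:
            self.old[page] = self.tick          # new pages enter the old sublist
        while len(self.old) > self.old_cap:
            self.old.popitem(last=False)        # evict from the old-sublist tail

    def resident(self, page):
        return page in self.young or page in self.old


def hot_pages_surviving_scan(old_blocks_time):
    pool = MidpointLRU(capacity=20, old_blocks_time=old_blocks_time)
    hot = range(6)
    # OLTP-style warm-up: repeated access eventually promotes hot pages.
    for _ in range(12):
        for p in hot:
            pool.access(p)
    # mysqldump-style scan over 100 cold pages; each page is touched twice
    # back to back, as happens when several rows are read from one page.
    for p in range(100, 200):
        pool.access(p)
        pool.access(p)
    return sum(pool.resident(p) for p in hot)

# With old_blocks_time=0 the scan's second touches promote every scanned
# page, washing the hot pages out; with a threshold the scan stays in old.
print("hot pages resident, old_blocks_time=0: ", hot_pages_surviving_scan(0))
print("hot pages resident, old_blocks_time=50:", hot_pages_surviving_scan(50))
```

The back-to-back second touch is the crux: it defeats a naive "promote on second access" rule, which is exactly why a time threshold helps.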

There is an ingenious feature to deal with this problem, though – you just have to enable it separately. The variable innodb_old_blocks_time specifies the number of milliseconds that must pass after a block's first access before it can be moved to the young sublist. In typical cases like mysqldump, all accesses to the majority of pages are concentrated within a very small period of time, so setting innodb_old_blocks_time to some value will prevent important data from being pushed out of the buffer pool.

Let's repeat the benchmark with innodb_old_blocks_time=1000, which corresponds to 1 second.
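For reference, innodb_old_blocks_time is a dynamic variable, so it can be changed at runtime without a restart (a sketch; the my.cnf path here is just the sandbox config used above):

```sql
-- Takes effect for subsequent reads immediately:
SET GLOBAL innodb_old_blocks_time = 1000;

-- Or make it permanent under [mysqld] in my.cnf:
--   innodb_old_blocks_time = 1000
```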

Run separately, sysbench gives about 330 req/sec and mysqldump takes about 95 seconds, the same as before. Note we ran this test on a virtualized system, so we would not be able to measure small variances in performance reliably.

Running sysbench and mysqldump concurrently now gives about 325 req/sec for sysbench and some 100 seconds for mysqldump – a dramatic improvement of over 150x for sysbench, with results now in line with what you would expect.

Let's see what is going on with the buffer pool contents:



As you can see, the small table's PRIMARY KEY (which is what the benchmark uses) is now not pushed out of the buffer pool at all.

For advanced tuning, you might also look into changing how the buffer pool is split into young and old sublists via the innodb_old_blocks_pct variable, though we did not need to do that in this case.

I'm not sure whether there are any bad side effects from setting innodb_old_blocks_time to a non-zero value; if not, I would strongly suggest changing the default from zero in MySQL 5.6, as it would offer a much better out-of-the-box user experience.

As we can see, in the default configuration MySQL has a buffer pool which can easily be washed away by large table scans or heavy batch jobs. When this happens, a workload which normally runs in memory becomes disk-IO-bound, which can slow it down more than 100 times. The solution is rather easy, though: setting innodb_old_blocks_time to 1000 or another meaningful value is an easy remedy for this problem.

I want to thank Ovais Tariq for doing a lot of heavy lifting running benchmarks for this post.


Published at DZone with permission of Peter Zaitsev, DZone MVB. See the original article here.
