
Reality of InnoDB Caching



I have mentioned a few times that InnoDB caches data in pages, so even if your working set consists of relatively few rows, your working set in terms of pages can be rather large.

Now I came to do a little benchmark to show it in practice. I'm using the standard "sbtest" table with 10 million rows and a data file of 2247098368 bytes, which gives us 224 bytes of gross storage per row, including all overhead. The actual row size in this table is smaller, but let's use this number for our math. For the benchmark I'm using a fixed set of random IDs which are repeatedly selected in random order, which illustrates a data set with some randomly distributed "hot" rows. I read every row in the set once before timing, so when there is enough memory to cache every single row there should not be any disk reads in the benchmark run itself.
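The benchmark script itself isn't shown in the post, so here is a minimal sketch of the idea in Python, assuming the standard sysbench "sbtest" table with an integer id primary key. The driver, connection details, and the 100K lookups per run (which is roughly what the timings below imply) are my assumptions, not from the original.

import random
import time

import mysql.connector  # assumed driver; the original script is not shown

NUM_ROWS = 10_000_000          # rows in the sbtest table
WORKING_SET_SIZES = [100, 200, 400, 800, 1600, 3200, 6400,
                     12800, 25600, 51200, 102400]
LOOKUPS_PER_RUN = 100_000      # assumed; consistent with the timings below

conn = mysql.connector.connect(user="sbtest", password="sbtest",
                               database="sbtest")
cur = conn.cursor()

for set_size in WORKING_SET_SIZES:
    # Pick a fixed set of random "hot" ids spread over the whole table.
    hot_ids = random.sample(range(1, NUM_ROWS + 1), set_size)

    # Warm-up: touch every row in the set once before timing, so that a
    # buffer pool large enough to hold the whole set means no reads later.
    for row_id in hot_ids:
        cur.execute("SELECT * FROM sbtest WHERE id = %s", (row_id,))
        cur.fetchall()

    # Timed run: repeatedly select rows from the set in random order.
    start = time.time()
    for _ in range(LOOKUPS_PER_RUN):
        cur.execute("SELECT * FROM sbtest WHERE id = %s",
                    (random.choice(hot_ids),))
        cur.fetchall()
    elapsed = time.time() - start

    print("Testing %d out of %d keys  %.2f seconds,  %.2f lookups per second"
          % (set_size, NUM_ROWS, elapsed, LOOKUPS_PER_RUN / elapsed))

cur.close()
conn.close()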

I'm using a 128M buffer pool for this test, which should fit roughly 500K rows of 224 bytes each. Let's see what the benchmark really shows:
Testing 100 out of 10000000 keys  24.79 seconds,  4034.53 lookups per second
Testing 200 out of 10000000 keys  25.66 seconds,  3896.96 lookups per second
Testing 400 out of 10000000 keys  25.03 seconds,  3995.65 lookups per second
Testing 800 out of 10000000 keys  24.40 seconds,  4097.73 lookups per second
Testing 1600 out of 10000000 keys  24.83 seconds,  4026.59 lookups per second
Testing 3200 out of 10000000 keys  25.47 seconds,  3926.65 lookups per second
Testing 6400 out of 10000000 keys  26.51 seconds,  3771.73 lookups per second
Testing 12800 out of 10000000 keys  386.20 seconds,  258.94 lookups per second
Testing 25600 out of 10000000 keys  640.12 seconds,  156.22 lookups per second
Testing 51200 out of 10000000 keys  775.38 seconds,  128.97 lookups per second
Testing 102400 out of 10000000 keys  841.65 seconds,  118.81 lookups per second

As we see, in this case the database can really fit only somewhere between 6,400 and 12,800 different rows, which is about 1/50 of the "projected size". This number is very close to what I would have estimated: with 224 bytes per row we have some 70 rows per page, so with a random distribution you would expect up to 70 times more data to be fetched into the buffer pool than you actually need.
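A quick back-of-the-envelope check of that estimate; the 16KB default InnoDB page size is the only value here that isn't taken straight from the post:

# Rough math behind the "1/50 of projected size" observation.
BUFFER_POOL = 128 * 1024 * 1024   # 128M buffer pool used for the test
ROW_SIZE = 224                    # gross bytes per row, overhead included
PAGE_SIZE = 16 * 1024             # default InnoDB page size

projected_rows = BUFFER_POOL // ROW_SIZE   # ~599K rows if cached row by row
                                           # (the post rounds this to ~500K
                                           # to allow for overhead)
rows_per_page = PAGE_SIZE // ROW_SIZE      # ~73 rows sharing one 16K page
pages_in_pool = BUFFER_POOL // PAGE_SIZE   # 8192 pages of cache

# With randomly scattered hot rows, each hot row tends to land on its own
# page, so the pool effectively holds about one hot row per page.
effective_rows = pages_in_pool

print(projected_rows, rows_per_page, effective_rows)
# ~599186 projected vs ~8192 effectively cacheable: roughly a 70x gap, which
# matches the cliff between 6,400 and 12,800 keys in the results above.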

I'm wondering if any other storage engine can show better results in such a benchmark. Falcon, with its planned row cache, would fare better, and for the same reason I would expect better results with PBXT. I also should check the smaller page sizes available in Percona Server; my expectation is that with a 4K page size I can fit 4x more distinct rows in my cache.




Published at DZone with permission of Peter Zaitsev, DZone MVB. See the original article here.
