The focus of Tesora is to provide a scalable Database as a Service platform for OpenStack. The Database Virtualization Engine (DVE) plays a key part in this, as it allows databases to scale transparently across multiple MySQL shards.
Some of the features include:
- Transparent sharding of data across multiple storage nodes.
- Applications can connect to DVE directly using the MySQL protocol; no code changes are required.
- Full ACID-compliant transactions across multiple storage nodes.
- Storage nodes can be added to an existing cluster.
Synthetic benchmarks were run using sysbench on different environments and different DVE architectures, as provided by Tesora. Architectures with 1 and 3 DVE nodes were benchmarked using up to 5 storage nodes.
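To make the methodology concrete, a run like this can be scripted by assembling the sysbench command line per concurrency level. The sketch below uses the classic sysbench 0.4-style OLTP flags; the host name, table size, and thread counts are illustrative assumptions, not the exact configuration from the report.

```python
# Sketch: build a sysbench 0.4-style OLTP command line for one benchmark run.
# Flag names follow the classic sysbench OLTP test; host, table size, and
# thread counts here are illustrative, not the report's actual settings.

def sysbench_oltp_cmd(host, threads, table_size=10_000_000, action="run"):
    """Return the argv list for a sysbench OLTP run against a DVE endpoint."""
    return [
        "sysbench",
        "--test=oltp",
        f"--mysql-host={host}",        # point at DVE instead of MySQL directly
        "--mysql-user=sbtest",
        f"--oltp-table-size={table_size}",
        f"--num-threads={threads}",
        action,
    ]

# One command per concurrency level tested.
for t in (8, 16, 32, 64, 128):
    print(" ".join(sysbench_oltp_cmd("dve-node-1", t)))
```

Because DVE speaks the MySQL protocol, the benchmark client is simply pointed at a DVE node rather than at MySQL itself.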
The environments include a disk-bound dataset, intended to give more insight into how large datasets scale across multiple nodes. This type of use case is a common reason for companies to look into solutions like sharding.
A memory-bound dataset was also benchmarked to measure how DVE performs when disk I/O is not the limiting factor.
The memory-bound dataset was also used to compare a standalone MySQL Instance with a single DVE node using a single storage node. This provides more insight into the amount of overhead DVE creates at its most basic setup.
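That overhead can be expressed as a simple throughput ratio. The sketch below shows the calculation with made-up numbers; the figures are placeholders, not measurements from the report.

```python
# Sketch: quantify DVE overhead from throughput, with illustrative numbers.
# standalone = plain MySQL; dve = one DVE node in front of one storage node.

def overhead_pct(standalone_tps, dve_tps):
    """Percent of throughput lost by inserting DVE in front of MySQL."""
    return (standalone_tps - dve_tps) / standalone_tps * 100

# Hypothetical figures -- NOT taken from the report.
standalone_tps = 1000.0
dve_tps = 800.0
print(f"DVE overhead: {overhead_pct(standalone_tps, dve_tps):.1f}%")  # 20.0%
```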
The remainder of this blog post gives a general overview on the findings of these benchmarks.
The full report, which includes more information on the configuration and goes deeper in its analysis of the results, can be downloaded here (PDF).
OLTP – Disk Bound – Throughput:
OLTP – Disk Bound – Response Time:
More complex transactions that query across multiple shards scale quite well, even beyond linearly. The explanation for that super-linear scalability is that the aggregate available memory grows as storage nodes are added: the environment becomes less disk bound and performs better.
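A hypothetical calculation makes the super-linear effect concrete: going from one to three storage nodes triples the aggregate buffer-pool memory, so a larger fraction of the working set is served from RAM instead of disk. The numbers below are invented for illustration only.

```python
# Sketch: why disk-bound workloads can scale beyond linearly.
# Illustrative numbers only -- not measurements from the report.

def speedup(base_tps, scaled_tps):
    """Throughput gain relative to the single-node baseline."""
    return scaled_tps / base_tps

nodes = 3
base_tps = 200.0    # 1 storage node, heavily disk bound
scaled_tps = 750.0  # 3 storage nodes: 3x memory, working set fits better
s = speedup(base_tps, scaled_tps)
print(f"{nodes} nodes -> {s:.2f}x speedup ({s / nodes:.2f}x per node)")
# A per-node factor above 1.0 indicates super-linear scaling.
```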
SELECT, INSERT, UPDATE – Disk Bound – Throughput:
SELECT, INSERT, UPDATE – Disk Bound – Response Time:
Single-row reads and writes scale even better. This demonstrates that sharding has to be tailored to both the data and the queries that will be executed across those tables.
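Why key-based single-row operations scale so well becomes clear from a minimal routing sketch: when a query carries the shard key, it can be sent to exactly one storage node, while a query without it must fan out to all of them. Hash-based routing is one common scheme, used here for illustration; it is not necessarily the distribution model DVE implements.

```python
# Sketch: hash-based shard routing for single-row lookups.
# A query that includes the shard key touches exactly one node;
# a query without it must be fanned out to every node.
import hashlib

def shard_for(key, n_shards):
    """Deterministically map a shard key to one of n_shards storage nodes."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_shards

n_shards = 5
# Point lookup by primary key: a single node does the work.
print("id=42 routes to shard", shard_for(42, n_shards))
# A scan that lacks the shard key would have to query all 5 shards.
```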
Disk performance is poor, especially on storage nodes using default EBS volumes, which makes the difference even larger.
The good thing about this solution is that storage and DVE nodes can be added to increase the capacity of the whole system, with no application code changes. Running such an environment may come at a higher infrastructure cost, but it can save a lot of resources, and thus money, on development. Of course, since a sharded architecture such as this is more complex than a non-sharded one, the operational cost should not be ignored.
Memory Bound Throughput:
Memory Bound Response Time:
The overhead of using DVE is noticeable, but much of it is related to the added network hop in the database layer and the CPU requirements of the DVE nodes.
DVE suffers above 64 threads, as the sharp increase in response time shows.
OLTP – Disk Bound:
DVE nodes are very CPU intensive. Throughout the analysis, the bottleneck is often the CPU-constrained DVE nodes; this is clearly visible in the benchmark results for p1_s3 and p1_s5, which use only one DVE node.
Even with simpler single-row, primary key-based SELECT statements, the overhead is noticeable.
Keep in mind that the hardware specifications of the DVE nodes (8 cores) were much higher than those of the database nodes (2 cores), which makes the issue even more apparent.
Overall, DVE looks very promising. Judging from the sysbench results, DVE scales very well, provided that enough CPU resources are available for the DVE nodes. You can download the full report here (PDF).
It will be interesting to see how well DVE scales on larger instance types. Other types of benchmarks should also be performed, such as LinkBench. It is also important to understand the impact on the operational side: adding storage nodes, backup and recovery, high availability, and how well it handles transactions in various scenarios. Stay tuned.
Originally written by Kenny Gryp for MySQL Performance Blog.