As more “fast” storage technologies (such as SATA SSD and NVMe SSD) emerge, organizations with big data workloads want to leverage them for higher throughput and lower latency. To this point, however, no detailed analysis has been published about the true significance of that performance boost, nor about how best to mix fast and “slow” storage to balance performance against cost.
Recently, software engineers in Intel’s Software Solution Group did a detailed study of Apache HBase write performance on different storage media. (Results were originally published via the ASF Blog.) In the study, we used the hierarchical storage management (HSM) support in HDFS, with YCSB as the benchmark, to store different categories of HBase data on three storage types: HDD, SSD, and RAMDISK. (HDD is the most widely deployed storage today, SATA SSD is faster storage that is growing in popularity, and RAMDISK was used to emulate extremely high-performance PCIe SSDs.)
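For readers curious how such a tiered layout is configured, HDFS exposes its storage policies through the `hdfs storagepolicies` command. The sketch below shows the general mechanism only; the paths are illustrative and are not necessarily those used in the study, and it assumes DataNode volumes have been tagged with `[SSD]` or `[RAM_DISK]` in `dfs.datanode.data.dir`.

```shell
# List the storage policies this HDFS build supports
# (typically HOT, WARM, COLD, ALL_SSD, ONE_SSD, LAZY_PERSIST)
hdfs storagepolicies -listPolicies

# Pin a directory (here, an illustrative HBase WAL path) to
# SSD-backed DataNode volumes
hdfs storagepolicies -setStoragePolicy -path /hbase/WALs -policy ALL_SSD

# LAZY_PERSIST targets volumes tagged RAM_DISK, which is one way to
# emulate extremely fast storage in an experiment like this one
hdfs storagepolicies -setStoragePolicy -path /hbase/WALs -policy LAZY_PERSIST

# Verify which policy is in effect for the path
hdfs storagepolicies -getStoragePolicy -path /hbase/WALs
```

Because these commands require a running HDFS cluster, they are shown here as a configuration fragment rather than a runnable script.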
In general, this study tries to answer the following questions:
- What is the maximum performance a user can achieve by using fast storage?
- Where are the bottlenecks?
- What is the best balance between performance and cost, and how can it be achieved?
- How can the performance of a cluster with different storage combinations be predicted?
We believe that this study provides the first comprehensive and objective analysis of HBase performance on fast storage technology.
Download the full report here.
Jingcheng Du and Wei Zhou are Software Engineers at Intel, and HBase contributors.