Performance Evaluation of SST Data Transfer Without Encryption: Part I
We evaluate the current state of xtrabackup, the most advanced State Snapshot Transfer method, which is used to provision the joining node with all the necessary data.
In this blog post, we evaluate the performance of SST data transfer without encryption.
A State Snapshot Transfer (SST) operation is an important part of Percona XtraDB Cluster. It's used to provision the joining node with all the necessary data. There are three SST methods available: mysqldump, rsync, and xtrabackup. The most advanced one, xtrabackup, is the default SST method in Percona XtraDB Cluster.
We decided to evaluate the current state of xtrabackup, focusing on the process of transferring data between the donor and joiner nodes, to find out if there is any room for improvement or optimization.
Since the security of the network connections used in a Percona XtraDB Cluster deployment is one of the most important factors affecting SST performance, we will evaluate SST operations in two setups: without network encryption, and in a secure environment.
In this post, we will take a look at the setup without network encryption.
Here's the setup:
- Database server: Percona XtraDB Cluster 5.7 on the donor node.
- Database: sysbench database with 100 tables, 4M rows each (total ~122GB).
- Network: donor/joiner hosts connected with a dedicated 10Gbit LAN.
- Hardware: donor/joiner hosts with 28 cores + HT, 256GB RAM, Samsung SSD 850, Ubuntu 16.04.
In our tests, we measure the amount of time it takes to stream all the necessary data from the donor to the joiner with each of the SST methods.
Before testing, I measured read/write bandwidth limits of the attached SSD drives (with the help of sysbench/fileio); they are ~530-540MB/sec. That means that the best theoretical time to transfer all of our database files (122GB) is ~230sec.
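The ~230 sec lower bound is just the database size divided by the measured disk bandwidth. A quick back-of-envelope shell calculation (using the figures from the setup above) reproduces it:

```shell
# Theoretical best transfer time = data size / disk bandwidth
DB_SIZE_GB=122      # total size of the sysbench database
DISK_MB_S=530       # measured sequential SSD bandwidth, MB/sec
echo "$(( DB_SIZE_GB * 1024 / DISK_MB_S )) seconds"   # ~235 seconds
```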
Schematic View of SST Methods
Check out some of the main SST methods.
Streaming DB Files From the Donor to Joiner With tar
(donor) tar | socat  -->  socat | tar (joiner)
tar is not really an SST method. It's used here just to get some baseline numbers to understand how long it takes to transfer data without extra overhead.
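For reference, a minimal version of this baseline pipeline might look as follows. This is a sketch, not the actual SST script: the port number, host name, and datadir paths are assumptions, and socat must be installed on both hosts.

```shell
# Joiner side (run first): listen on a TCP port and unpack the incoming tar stream
socat -u TCP-LISTEN:4444,reuseaddr STDOUT | tar xf - -C /var/lib/mysql

# Donor side: pack the datadir and stream it over the network
tar cf - -C /var/lib/mysql . | socat -u STDOUT TCP:joiner-host:4444
```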
Streaming DB Files From the Donor to Joiner With rsync Protocol
(donor) rsync  -->  rsync in daemon mode (joiner)
While testing the rsync SST method, I found that the current way of streaming data is quite inefficient: rsync parallelization is directory-based, not file-based. So, if you have three directories, for instance sbtest (100 files/100GB), mysql (75 files/10MB), and performance_schema (88 files/1MB), the rsync SST script will start three rsync processes, where each process handles its own directory. As a result, instead of a parallel transfer, we end up with a single stream that effectively only transfers the largest directory (sbtest). Replacing that approach with one that iterates over all files in the datadir and queues them to rsync workers allows us to speed up the data transfer by two to three times. On the charts, rsync is the current approach and rsync_improved is the improved one.
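The file-based approach described above can be sketched with find and xargs. This is not the actual SST script; the datadir path, the rsync daemon module name, and the worker count of 4 are placeholders:

```shell
# Walk every file under the datadir and hand each one to a pool of 4
# parallel rsync workers; -R preserves the file's relative path on the
# destination, so the directory layout is reproduced on the joiner.
cd /var/lib/mysql
find . -type f -print0 \
  | xargs -0 -n1 -P4 -I{} rsync -R {} rsync://joiner-host/datadir/
```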
Back Up Data on Donor Side and Stream It to Joiner in xbstream Format
(donor) xtrabackup | socat  -->  socat | xbstream (joiner)
At the end of this post, you will find the command lines used for testing each SST method.
Streaming our database files with tar took a minimal amount of time, very close to the best possible time (~230 sec). xtrabackup is slower (~2x), as is rsync (~3x).
Issues
From profiling xtrabackup, we can clearly see two things:
- IO utilization is quite low.
- A notable amount of time was spent in crc32 computation.
Issue 1
xtrabackup can process data in parallel; however, by default it uses only a single thread. Our tests showed that increasing the number of parallel threads to 2 or 4 with the --parallel option improves IO utilization and reduces streaming time. You can pass this option to xtrabackup by adding the following to the [sst] section of my.cnf:
[sst]
inno-backup-opts="--parallel=4"
Issue 2
By default, xtrabackup uses software-based crc32 functions from the libz library. Replacing this function with a hardware-optimized one allows a notable reduction in CPU usage and a speedup in data transfer. This fix will be included in the next release of xtrabackup.
We ran more tests for xtrabackup with the parallel option and hardware-optimized crc32, and got results that confirm our analysis. The streaming time for xtrabackup is now very close to the baseline and the storage limits.
Testing Details
For the purposes of testing, I've created a script (sst-bench.sh) that covers all the methods used in this post, so you can measure all of the above SST methods in your own environment. Before running the script, adjust several environment variables at the beginning, such as the joiner IP and the datadir locations on the joiner and donor hosts. Then put the script on both the donor and joiner hosts and run it as follows:
#joiner_host> sst_bench.sh --mode=joiner --sst-mode=<tar|xbackup|rsync>
#donor_host> sst_bench.sh --mode=donor --sst-mode=<tar|xbackup|rsync|rsync_improved>
Published at DZone with permission of Alexey Stroganov, DZone MVB.