
Lock, Stock, and MySQL Backups [Q+A]


After our webinar on MySQL backups, we received a number of interesting questions. Here, we answer some of them in writing.


Hello again! On August 16, we delivered a webinar on MySQL backups. As always, we received a number of interesting questions. We answered some of them during the webinar, but we'd like to share a few of them here in writing.

Q: What is the best way to maintain daily full backups, but selective restores omitting certain archive tables?

A: There are several ways this can be done, listed below (though not necessarily limited to the following):

  1. Using logical dumps (e.g. mydumper, mysqldump, mysqlpump). These allow you to dump per table and thus restore selectively (see the sketch after this list).
  2. Back up the important tables and archive tables separately, allowing you to restore them separately, as well. This is a better approach, in my opinion, since if the archive tables do not change often, you can back up only what has changed. This gives you more flexibility in backup size and speed. This is also possible if consistency or inter-dependence between the archive and other tables isn't required.
  3. Filesystem- or XtraBackup-based backups are another option. However, the restore process means you need to restore the full backup and discard what you do not need. This is especially important if your archive tables use InnoDB (where metadata is stored in the main tablespace).
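As an illustration of the first two approaches, here is a minimal sketch using mysqldump; the database and table names (mydb, important_table, archive_2016, archive_2017) are hypothetical:

    # Approach 1: logical dumps let you dump and restore individual tables.
    mysqldump --single-transaction mydb important_table > important_table.sql

    # Approach 2: a daily full backup that skips the archive tables...
    mysqldump --single-transaction \
      --ignore-table=mydb.archive_2016 \
      --ignore-table=mydb.archive_2017 \
      mydb > mydb_daily.sql

    # ...while the archive tables get their own, less frequent dump.
    mysqldump --single-transaction mydb archive_2016 archive_2017 > mydb_archive.sql

mydumper gives you the same flexibility naturally, since it already writes one file per table.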

Q: Can you recommend a good script on GitHub for mysqlbinlog backup?

A: This is a shameless plug, but I would recommend the tool I wrote called pyxbackup. At the time it was written, binary log streaming with 5.6 was fairly new, so there weren't many tools that we could find or adopt that would closely integrate with backups — hence writing it from scratch.

Q: mysqlbinlog can stream binary logs to a remote server. Isn't simply copying the binlog to the remote location just as effective — especially if done frequently using a cron job that runs rsync?

A: True, though be aware of a few differences (the streaming command itself is sketched after this list):

  1. rsync may not capture data that has not yet been flushed to disk from the filesystem cache.
  2. If the source crashes, you could lose the last binary log(s) between the last rsync and the crash.
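For reference, the remote streaming the question refers to looks roughly like this in 5.6 and later; the host, user, and starting binlog file name are hypothetical:

    # Continuously pull binary logs from the source over the replication protocol:
    mysqlbinlog --read-from-remote-server --raw --stop-never \
      --host=source-db --user=backup_user --password \
      mysql-bin.000001

Because it streams events continuously from the server rather than copying files on a schedule, it narrows both of the gaps above.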

Q: How is it possible to create a backup using xtrabackup compressed directly to a volume with low capacity (considering that the --apply-log step is needed)?

A: In the context of this question, we cannot stream backups for compression and do the apply-log phase at the same time. The backup needs to be complete for the apply-log phase to start — hence compress, decompress, then apply-log. Make sure enough disk space is available for the dataset size, plus your backups if you want to be able to test your backups with apply-log.
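A minimal sketch of that sequence using the innobackupex-style flow; the backup directory and timestamp are hypothetical, and --decompress requires qpress to be installed:

    # Take a compressed backup onto the low-capacity volume:
    innobackupex --compress --compress-threads=4 /backups

    # Before the backup can be restored (or test-restored), decompress and apply the log:
    innobackupex --decompress /backups/2017-08-16_02-00-00
    innobackupex --apply-log /backups/2017-08-16_02-00-00

Note that the decompress and apply-log steps need additional space on top of the compressed copy, which is exactly the constraint described above.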

Q: How can you keep connection credentials secure for automated backup?

A: Two things...

  1. Tools like xtrabackup, mysqldump, mydumper, and mysqlpump have options to pass a client defaults file. You can store credentials in those files and restrict them to only a few users on the system (including the backup user); see the sketch after this list.
  2. Aside from the first item, most of the tools also support login paths if you do not want your credentials in a plain-text file. It is not completely secure, as credentials from login paths can still be decoded.
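Minimal sketches of both options; the file path, user names, and login-path name are hypothetical:

    # Option 1: a defaults file readable only by the backup user
    # /etc/mysql/backup.cnf contains:
    #   [client]
    #   user=backup
    #   password=secret
    chmod 600 /etc/mysql/backup.cnf
    xtrabackup --defaults-extra-file=/etc/mysql/backup.cnf --backup --target-dir=/backups/full

    # Option 2: a login path, stored obfuscated in ~/.mylogin.cnf rather than in plain text
    mysql_config_editor set --login-path=backup --host=localhost --user=backup --password
    mysqldump --login-path=backup --single-transaction --all-databases > all.sql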

Q: Which is faster: mydumper or 5.7's mysqlpump?

A: This is an interesting question; it belongs to the "it depends" category! First, we have not benchmarked these two tools head-to-head. Second, with their different approaches, one may be faster for a specific use case, while the other is faster for a different one. For example, with mydumper's support for different lock granularities, it could be faster on InnoDB-only, highly concurrent workloads.
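If you want to benchmark them against your own workload, the comparable parallel invocations look roughly like this; thread counts and output paths are just examples:

    # mydumper: 8 threads, one file per table
    mydumper --host=localhost --threads=8 --outputdir=/backups/mydumper

    # mysqlpump (5.7): parallel dump using 8 threads into a single stream
    mysqlpump --default-parallelism=8 --all-databases > /backups/mysqlpump.sql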

Q: If we wanted to migrate a 2.5TB database over a VPN connection, which backup and restore method would you recommend? The method would need to be resilient. This would be for migrating an on-premises database to a MySQL RDS instance at AWS.

A: Again, there could be a number of ways this might be achieved, but the one we frequently go with is (see the sketch after these steps):

  1. Set up an EC2 instance that would replicate from the original source.
  2. Once the replication is caught up, stop replication and do a parallel dump of the data per table.
  3. Import the data to RDS per table where you can monitor progress and failure, and retry each table if necessary (Hint: mydumper can also chunk).
  4. Once complete, configure RDS to replicate from EC2 to complete its data.
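A rough sketch of steps 2-4, assuming mydumper/myloader and the RDS external-replication stored procedures; host names, credentials, thread counts, and binlog coordinates are all hypothetical:

    # Step 2: on the EC2 replica, after stopping replication, dump in parallel per table
    mydumper --host=127.0.0.1 --threads=4 --rows=500000 --outputdir=/backups/dump

    # Step 3: load into RDS table by table; a failed table can be retried on its own
    myloader --host=myapp.xxxxxx.us-east-1.rds.amazonaws.com --user=admin --password=secret \
             --threads=4 --directory=/backups/dump

    # Step 4: point RDS at the EC2 instance to replicate the remaining changes
    #   CALL mysql.rds_set_external_master('ec2-host', 3306, 'repl', 'repl_pass',
    #                                      'mysql-bin.000042', 4, 0);
    #   CALL mysql.rds_start_replication;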

Bonus: If you are migrating to Aurora, did you know you can use an XtraBackup-based backup directly?

Q: What if I have 1TB of data to back up and restore to a new server? How much time does it take? Can we restore/stream at the same time while taking the backup?

A: Assuming you have direct access to the source server, XtraBackup is an excellent option here. Back up from the source and stream directly to the new server. Once complete, prepare the backup on the new server and it should be ready for use. These instructions are mostly written for provisioning new slaves, but most of the steps apply for the same outcome.
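A minimal sketch of the stream-and-prepare flow; the host name and data directory are hypothetical, and the time it takes depends almost entirely on network and disk throughput:

    # On the source: stream the backup straight to the new server over SSH
    xtrabackup --backup --stream=xbstream --target-dir=./ \
      | ssh new-server "xbstream -x -C /var/lib/mysql"

    # On the new server: prepare (apply the log) before starting MySQL
    xtrabackup --prepare --target-dir=/var/lib/mysql

The stream is consumed on the destination while the backup is still running, which is what makes the backup/restore overlap possible.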

Q: Is mydumper your product, and how long would it take to back up a few million rows of data?

A: No, mydumper is not official Percona software. Percona contributes to this software as it benefits both our customers and the community.

Q: Will it lock my table during the process? How does one restore a mydumper backup?

A: By default, the table will be locked. However, this is highly configurable. For example, if you are using a version of Percona Server for MySQL that supports Backup Locks, the lock time is significantly reduced. Additionally, depending on the backup requirements, you can skip locks altogether.

mydumper comes with a complementary tool called myloader that does the opposite. It restores the resulting dumps into the destination server in parallel.
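A minimal sketch of both sides; the names are hypothetical, and --trx-consistency-only is only appropriate when every table being dumped is InnoDB:

    # Dump with minimal locking on an InnoDB-only workload
    mydumper --host=localhost --threads=4 --trx-consistency-only --outputdir=/backups/dump
    # (or --no-locks if you can tolerate a potentially inconsistent snapshot)

    # Restore the dump in parallel with myloader
    myloader --host=new-server --threads=4 --overwrite-tables --directory=/backups/dump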

Thank you again for attending the webinar. If you were not able to make it, you can still watch the recording and view the slides here.

By the way, if you are attending Percona Live in Europe, Marcelo's talk on continuous backup is an excellent follow-up to this webinar!



