Backup and Restore Challenges: Cassandra Compaction

Learn how compaction improves storage efficiency and read performance in Cassandra, and the impact it has on data backup and restoration.

In this article, I will cover the technical challenges users may face when backing up and restoring next-gen databases (such as Apache Cassandra) that perform data compaction. I will start with an introduction to compaction, describe how compaction can impact backup and recovery, run an experiment to validate our hypothesis, and finally evaluate techniques in Apache Cassandra for dealing with compaction. Our conclusion: compaction can significantly inflate the size of your backups and can negate the benefits of compression or deduplication.


Typically, compaction is done in a database for two primary reasons:

  1. To reduce the storage usage.

  2. To improve read performance by merging keys and obtaining a consolidated index.

For example, in Apache Cassandra, data files are merged periodically to form compacted SSTables. However, compaction can have a significant impact on your backup and restore strategy. Cassandra uses two major compaction strategies:

Size-tiered compaction: The size-tiered compaction strategy (STCS) merges multiple SSTables of similar sizes into one larger SSTable.

Leveled compaction: The leveled compaction strategy (LCS) organizes data files into levels; every time a file at a given level reaches a certain size, its contents are merged into the overlapping files one level up.
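As an illustration, the compaction strategy is chosen per table in CQL. A hedged sketch follows; the keyspace and table names are hypothetical, and the option values shown are the common defaults:

```sql
-- Hypothetical table using size-tiered compaction (Cassandra's default)
ALTER TABLE my_keyspace.events
  WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                     'min_threshold': 4};

-- Switching the same table to leveled compaction
ALTER TABLE my_keyspace.events
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 160};
```

Which strategy a table uses directly affects how often its SSTables are rewritten, which, as we will see, matters a great deal for backups.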

Impact on Backup and Recovery

One of the key requirements for any backup and restore solution is to perform incremental-forever backup and point-in-time restore. Incremental-forever backup means that only changed data blocks are protected at every backup interval, which saves network bandwidth and storage consumption and also reduces the operational recovery point objective (RPO). A typical implementation of incremental backup periodically takes a snapshot of the disks and identifies the data that has changed since the last backup.

The most commonly used technique for incremental backups is periodic snapshots. However, if this technique is applied to modern databases that perform compaction, then the same record, or set of records (potentially terabytes at scale), will be copied multiple times. The frequency of compaction, and hence the total amount of duplicate data copied, depends on several factors, such as database change rate, compaction strategy, configuration, and storage bandwidth.
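The effect can be sketched with a toy model (file names and sizes below are invented for illustration, not real SSTables): an incremental backup copies every file not present in the previous snapshot, so a compacted SSTable, whose contents are mostly old records, is copied again in full.

```python
# Toy model: snapshot-based incremental backup vs. compaction.
# File names and sizes (in GB) are illustrative, not real Cassandra SSTables.

def incremental_backup(prev_snapshot, curr_snapshot):
    """Copy every file that did not exist at the last backup."""
    new_files = {f: size for f, size in curr_snapshot.items()
                 if f not in prev_snapshot}
    return sum(new_files.values()), new_files

# Interval 1: two fresh SSTables are flushed (100 GB of new records).
snap1 = {}
snap2 = {"sstable-1": 50, "sstable-2": 50}
copied1, _ = incremental_backup(snap1, snap2)   # 100 GB copied

# Interval 2: 50 GB of new data arrives, then compaction merges
# sstable-1 and sstable-2 into one new file. No records changed,
# but the merged file looks brand new to the snapshot diff.
snap3 = {"sstable-3": 50, "sstable-1+2-compacted": 100}
copied2, _ = incremental_backup(snap2, snap3)   # 150 GB copied

total_added = 150                  # GB of genuinely new records
total_copied = copied1 + copied2   # 250 GB actually backed up
print(total_copied, total_added)
```

Even in this tiny example, two backup intervals copy 250 GB to protect 150 GB of new data; every further round of compaction widens the gap.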

Compaction Experiments

Our hypothesis was that compacted Cassandra data files can increase storage multi-fold, leaving a customer with bloated secondary storage full of redundant backup data. Admittedly, many other factors play a role in determining the increase in size, such as initial data size, database change rate, time to live (TTL), backup frequency, and workload type (update, delete, insert).

We performed an experiment to validate this hypothesis and measured the amount of duplicate data that was backed up. The assumptions that we made in the experiment are listed below:

  • Initial data: ~200 GB
  • Change rate: ~4.5% per day
  • Backup interval: every 8 hours
  • Workload type: insert only

Size-tiered: For the first five days, we backed up 2 times the amount of data added to the system. In the next two days, the overall data copied jumped to 4 times the original data (likely due to a relatively large compaction run).

Leveled compaction: Over a period of five days, we backed up 12 times the amount of data that we added to the system. Leveled compaction gets triggered quite often, and with our backup frequency of eight hours, the same data was captured multiple times throughout the day due to multiple rounds of compaction.

The experiment clearly demonstrates that compaction can cause a multi-fold increase in secondary storage requirements, leading to a huge CAPEX burden on customers. Case in point: in the example above, secondary storage under leveled compaction grew to 12 times the data added to primary storage.
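A back-of-the-envelope calculation puts these multipliers in absolute terms. The per-day figures are assumptions (taking the 4.5% daily change rate against the 200 GB initial data set); only the 2x and 12x multipliers come from the measurements above:

```python
# Rough arithmetic for the experiment above; per-day figures are
# assumptions, only the 2x/12x multipliers come from the measurements.
initial_gb = 200
change_rate = 0.045          # ~4.5% per day, insert-only workload
days = 5

added_gb = initial_gb * change_rate * days   # roughly 45 GB of new records
stcs_backup_gb = 2 * added_gb                # size-tiered: ~2x in the first 5 days
lcs_backup_gb = 12 * added_gb                # leveled: ~12x over the same period

# roughly 45, 90, and 540 GB respectively
print(added_gb, stcs_backup_gb, lcs_backup_gb)
```

So under leveled compaction, roughly 45 GB of genuinely new data could translate into on the order of 540 GB landing on secondary storage.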

Cassandra Techniques to Deal With Compaction

Cassandra does provide an incremental backup option. With incremental backup enabled, Cassandra creates a hard link to every newly flushed, non-compacted data file, and one can back up directly from the backup folder. However, this approach has some serious shortcomings:

  • Granularity: Incremental backup is not per-table; it is global, across the entire database. Even if you want to back up only selected tables (granular backups), incremental backup happens for the entire database.
  • Space: Cassandra does not provide a utility to remove the hard links (that responsibility falls to the backup admin or software), so there is a huge risk of storage overflow on the production cluster (and any significant risk to production is huge). In effect, the secondary storage service decides the fate of the primary storage service!
  • Operational issues: In a production environment, someone or some script can accidentally disable the incremental backup option. Older versions of Cassandra require a server restart to enable this feature, and the same to disable it.
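For reference, here is a sketch of how the feature is toggled and where the hard links land. The paths follow Cassandra's default layout but are illustrative (adapt them to your `data_file_directories` setting), and the cleanup step at the end is exactly the part Cassandra leaves to the backup admin:

```shell
# Enable incremental backups at runtime (recent Cassandra versions);
# to make it persistent, set incremental_backups: true in cassandra.yaml.
nodetool enablebackup
nodetool statusbackup   # reports whether incremental backup is running

# Hard links to each newly flushed (non-compacted) SSTable appear under:
#   <data_dir>/<keyspace>/<table>/backups/

# Cassandra never deletes these links. After copying them off the node,
# the backup software (or admin) must clean up, or the disk fills up.
# Hypothetical cleanup of links older than one day:
find /var/lib/cassandra/data/*/*/backups/ -type f -mtime +1 -delete
```

Note the asymmetry: enabling the feature is one command, but keeping the backups directory from overflowing is an ongoing operational task you own.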


Ignoring compaction in your backup and recovery strategy can cause a multi-fold increase in secondary storage requirements, and negate the benefits of compression and deduplication (typically provided by secondary storage vendors). You can try features like incremental backup if you are an expert and ready to invest a good amount of time in designing and maintaining your custom backup and restore operations.

Opinions expressed by DZone contributors are their own.
