A Modern Database Backup and Recovery Checklist
Do you have single points of failure? How do you handle deduplication? These questions and more need consideration when you're dealing with database backup and recovery.
Enterprises are quickly onboarding next-generation, mission-critical applications, such as eCommerce, artificial intelligence, machine learning, and IoT, on non-relational databases and distributed local storage. As organizations move these applications to scalable databases because they are easier to deploy, always available, and cost-effective, they also need a robust, scalable backup and recovery solution to protect these modern databases. The challenge is that while enterprise adoption of next-generation applications has been rising, the risk of data loss remains, because non-relational databases lack a viable data protection and recovery solution. Here is what I mean:
- There is currently no solution that allows corrupted data to be removed, replayed, and propagated with minimal downtime to customer-facing applications.
- Reputable studies have concluded that as much as 75% of data disasters that ultimately lead to downtime are the result of some sort of human error. Because human errors have a big impact on business operations and occur at random, they cannot be addressed by the solutions offered today, such as native database replication or node-level snapshots.
- Database replication does not provide point-in-time backup or any-point-in-time recovery, so enterprises cannot go back and fix these errors. In fact, once an error is introduced, the database's redundant-node replication can propagate the corruption almost immediately across all nodes, as the sketch after this list illustrates.
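The contrast is easy to see in miniature: replication forwards every write to every node, including a corrupting one, while point-in-time recovery replays an operation log only up to a chosen cutoff. Below is a minimal sketch; the Op record and function names are illustrative, not any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Op:
    ts: float   # timestamp of the write
    key: str
    value: str

def replicate(oplog, nodes):
    """Replication applies every op to every replica, so a corrupting
    write reaches all nodes within moments of being issued."""
    for op in oplog:
        for node in nodes:
            node[op.key] = op.value

def restore_to_point_in_time(oplog, cutoff_ts):
    """Point-in-time recovery rebuilds state by replaying only the
    operations recorded at or before the cutoff, excluding the bad write."""
    state = {}
    for op in oplog:
        if op.ts <= cutoff_ts:
            state[op.key] = op.value
    return state
```

Replication gives you availability, not a way back; only the logged history lets you rewind to the moment before the error.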
So, as you begin to explore backup and recovery solutions for your next-generation applications, here is a list of items to keep in mind.
Storage Is No Longer Unified or Shared
A key architectural transition is underway: monolithic pools of shared storage are no longer used for next-generation applications and databases. Next-generation databases are scale-out in nature and use commodity storage at the node level, creating a highly distributed storage pool. This distributed pool provides both low cost and high performance, but it is not conducive to traditional database backup and recovery tools, which rely on database durability logs (from a shared-storage LUN) for a point-in-time backup.
Eventual Consistency Has a Big Impact on Backup and Recovery
Next-generation databases are not built around the traditional ACID (Atomicity, Consistency, Isolation, Durability) transaction model. Transactions are a mechanism for guaranteeing these properties: they group related actions together so that, as a whole, a set of operations can be atomic, produce consistent results, be isolated from other operations, and be durably recorded. Eventual consistency, however, brings new challenges in generating a consistent point-in-time view of the database for operational recovery or test/dev needs. Novel, scale-out data protection technology is therefore needed to address eventual consistency.
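One common technique is to cut the cluster at a single logical timestamp: every node reports its locally durable writes, and the backup keeps only the writes at or below a watermark all nodes have reached. A minimal sketch follows, assuming each node exposes a hypothetical durable_ops() returning locally durable (timestamp, key, value) tuples; real systems typically use vector or hybrid logical clocks rather than raw timestamps.

```python
def consistent_snapshot(nodes):
    """Build a consistent point-in-time view of an eventually
    consistent cluster from each node's locally durable writes."""
    per_node = [node.durable_ops() for node in nodes]

    # Cut at the lowest high-water mark across nodes: every node is
    # known to have durably recorded all writes up to this timestamp.
    cut = min(max(ts for ts, _, _ in ops) for ops in per_node)

    # Merge every write at or below the cut, replaying in timestamp
    # order so the last writer wins for each key.
    snapshot = {}
    for ts, key, value in sorted(op for ops in per_node for op in ops):
        if ts <= cut:
            snapshot[key] = value
    return snapshot
```

Writes above the cut may not yet exist on every replica, so excluding them is what makes the resulting view consistent.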
Deduplication Matters at Scale
With traditional databases on unified or shared storage, block-level snapshots made image-based recovery and rollback relatively efficient. Distributed storage, combined with a scale-out database that commonly replicates data rather than joining across tables, leads to a rapid proliferation of redundant data and growing storage requirements (three to four copies of every write operation). Effective data reduction at scale is critical to the success and economics of data protection in this environment.
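Content-addressed chunking is the standard way to reclaim that redundancy: identical chunks hash to the same key and are stored once, however many replicas wrote them. A minimal sketch, using fixed-size chunks for brevity (production systems typically use variable-size, content-defined chunking):

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB chunks

class DedupStore:
    """Content-addressed chunk store: a chunk written by three
    replicas is stored once and referenced three times."""

    def __init__(self):
        self.chunks = {}  # sha256 digest -> chunk bytes

    def put(self, data: bytes) -> list[str]:
        """Store data, returning the recipe (list of chunk digests)
        needed to reassemble it."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedup happens here
            recipe.append(digest)
        return recipe

    def get(self, recipe: list[str]) -> bytes:
        return b"".join(self.chunks[d] for d in recipe)
```

Against a database that keeps three replicas of every write, the identical copies collapse into shared chunks, so the backup can store roughly one third of the raw footprint before compression even begins.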
Single Points of Failure Do Fail
Legacy data protection approaches have commonly relied on a media server, which presents both a choke point for the secondary data path and a single point of failure. Given the very high ingest rates and the real-time nature of applications deployed on modern scale-out databases, next-generation data protection infrastructure has to be distributed, both to hit performance targets (RPO and RTO windows) and to stay resilient in always-on environments.
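Concretely, that means fanning backup work out to every node in parallel instead of funneling it through one media server. A minimal sketch, where stream_backup is a hypothetical stand-in for a per-node copy to secondary storage:

```python
from concurrent.futures import ThreadPoolExecutor

def stream_backup(node: str, target: str) -> None:
    """Hypothetical per-node copy: each node streams its local
    shard directly to secondary storage."""
    print(f"backing up {node} -> {target}")

def backup_cluster(nodes: list[str], target: str) -> list[str]:
    """Fan the backup out across every node in parallel; there is
    no central media server to saturate or to fail."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        futures = {pool.submit(stream_backup, n, target): n for n in nodes}
    # A failed node degrades the job instead of aborting it; its
    # shard can be retried or pulled from a replica.
    return [n for f, n in futures.items() if f.exception() is not None]

failed_nodes = backup_cluster(["node-1", "node-2", "node-3"], "s3://backups/nightly")
```

With this shape, the backup fabric scales with the cluster it protects, and losing any single component costs you one shard's stream rather than the whole job.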
Checklist for Modern Database Data Protection
- How do you protect your next-generation scale-out databases today? What scenarios of data loss are you worried about?
- What downtime can your applications or business handle in the event of data corruption?
- Have you validated backup and recovery for your next-generation databases?
- Can you prove compliance with regulatory or line-of-business requirements for backup of your critical next-generation infrastructure?
- Can you recover to a desired or any point in time quickly? What manual effort is required to do this?
- How much storage is required to store your desired pool of recovery versions?
- Does your data protection approach scale out with your application and data growth?
- Can you efficiently refresh your test/dev environment to a smaller or larger cluster size as needed?
- What happens to your backup solution when a database node fails? Can you still perform backups? How about intermittent network failures?
- Can you recover your data in native format? Native formats free you from vendor lock-in and allow for data management services, such as search and analytics, on secondary data.