To Err is Human!
Data protection tooling is critical for enterprises to help them deploy and scale mission-critical applications with confidence. Check out Datos IO and review your data protection strategy.
If you accidentally drop your iPhone, there are several potential outcomes and only a few of them are good. So when I accidentally dropped my phone last week, part of me was hoping that it would be safe, while the other part of me was regretting not buying a protective cover. After all, I had put in considerable effort to port my contacts and applications to this phone. But, like I said, to err is human.
There is a clear parallel to this in enterprise IT, specifically for application architects, database administrators, and DevOps teams. Numerous next-generation applications (IoT, security analytics, SaaS, etc.) are now being developed on scale-out databases (such as Apache Cassandra, MongoDB, and Amazon DynamoDB) or ported to them from traditional relational databases (Oracle, SQL Server, etc.). It goes without saying that logical schema corruptions by developers or operational errors by DevOps teams are common in enterprise IT environments. Fortunately, enterprises realize this and have invested billions of dollars in traditional data protection products, such as Commvault and Symantec NetBackup, to protect against such "err" moments. However, the irony is that the suite of data protection products that exists for traditional relational databases does not exist for next-generation scale-out databases. Data protection tooling is critical for enterprises to help them deploy and scale mission-critical applications with confidence. And this is precisely the reason why we started Datos IO.
At Datos IO, our vision is to make data recoverable at scale for next-generation databases. We have shared our motivations, the problem, and the industry-first product that we are developing with Nik Rouda, Senior Analyst at Enterprise Strategy Group, and are humbled by ESG's recent report — Solution Showcase on Datos IO. Please give it a read and leave your comments for us!
Data protection is key to building robust enterprise applications to ensure that (1) no financial or productivity loss results from data loss or application downtime, (2) compliance and regulatory obligations are met, and (3) test/dev organizations are empowered for continuous development. A comprehensive data protection strategy ensures that data can be recovered in the event of disasters, failures, errors, or malicious virus attacks. Now, the good news is that next-generation databases natively provide replication, which allows for a degree of availability by keeping multiple replica copies of data. This ensures that a secondary copy of data is available in the event that the primary copy fails. Some databases take it a step further by keeping these replica copies geographically dispersed, which provides availability in the event of natural disasters.
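As a concrete illustration of geo-dispersed replication, a Cassandra keyspace can declare how many replicas to keep in each datacenter. The keyspace and datacenter names below are purely illustrative, not taken from any specific deployment:

```sql
-- Hypothetical keyspace definition. NetworkTopologyStrategy keeps a
-- configurable replica count per datacenter, so data survives the loss
-- of a node, or even an entire site. The names (orders, dc_east,
-- dc_west) are illustrative assumptions.
CREATE KEYSPACE orders
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc_east': 3,
    'dc_west': 3
  };
```

Note that this configuration addresses availability only: every replica receives the same writes, which is exactly why replication alone cannot undo a bad write, as the next section explains.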
However, replication is not backup. That is why enterprises today have a gap in addressing their data protection requirements. Enterprises value their data and need to address point-in-time backup use cases for that "what if" moment. Most studies suggest that more than half the time, data loss is triggered by human error, corruption, or virus attacks. While replication provides protection from hardware failures or even natural disasters, it is insufficient to meet enterprise data protection requirements. For example, if a schema corruption impacts the primary data copy, all replicas are impacted as well! A complementary solution to this challenge is taking point-in-time backups of critical databases. In its simplest form, this could be a snapshot of all the nodes in a cluster that is then transferred to backend storage. However, given the distributed nature of scale-out databases and their frequent hardware failures, these patchwork solutions (node-by-node snapshots) become operational nightmares to manage. In the best case, it takes several days to recover data (resulting in significant application and business downtime); in the worst case, the data may never be recoverable.
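To make the "patchwork" nature of node-by-node snapshots concrete, here is a minimal sketch of what orchestrating them looks like. It only builds the per-node `nodetool snapshot` commands (host names, the keyspace, and the tag format are illustrative assumptions); a real workflow would also have to run each command remotely, cope with nodes that fail mid-snapshot, and reconcile the per-node copies into one consistent point-in-time version — which is exactly the operational burden described above:

```python
# Hypothetical sketch: building one `nodetool snapshot` command per node
# of a Cassandra cluster. Sharing a single tag across nodes is what lets
# the scattered per-node snapshots later be treated as one backup.
from datetime import datetime, timezone

def snapshot_commands(hosts, keyspace, when=None):
    """Return one snapshot command per host, all sharing a common tag."""
    when = when or datetime.now(timezone.utc)
    tag = when.strftime("backup-%Y%m%dT%H%M%SZ")
    return [f"ssh {host} nodetool snapshot -t {tag} {keyspace}"
            for host in hosts]

# Usage: a three-node cluster backing up an (assumed) `inventory` keyspace.
cmds = snapshot_commands(["node1", "node2", "node3"], "inventory",
                         when=datetime(2016, 5, 1, tzinfo=timezone.utc))
for cmd in cmds:
    print(cmd)
```

Even this toy version hints at the problem: the snapshots are only loosely correlated by a tag, not truly consistent across nodes, and nothing here handles transfer to backend storage or recovery ordering.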
For scale-out databases, solving the problem of efficient backup and recovery is critically important, as Nik highlights in his report. As you architect your next application on a scale-out database, make sure that you review your data protection strategy. Sometimes we are lucky, as I was when my phone landed safely on the floor, but "luck is a very thin wire between survival and disaster."
Published at DZone with permission of Jeannie Liou, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.