Lessons Learned From the Typo That Brought Down AWS
If you felt the effects of the S3 outage, make sure your backup data is in a separate region, and know that recovery can be a slow, tedious process.
Anyone not living under a rock knows that Amazon’s S3 storage service recently experienced a widespread, roughly five-hour outage in its US East (Northern Virginia) region, taking down much of the internet, including popular sites like Netflix, Spotify, Pinterest, and Buzzfeed, along with thousands of smaller sites.
What’s most shocking, though perhaps not surprising, is the root cause of the outage: a typo! Rachel King put it well in a recent Fortune blog:
“Apparently all it takes to bring down the Internet isn’t a virus or malware or a well-organized, state-sponsored attack. A typo will do the trick.” I can only imagine how the poor soul who executed the offending command must have felt.
The reality is that humans are, well, human, and humans make mistakes. So, until that changes, and while we all march inexorably into the cloud, there is one paramount lesson to be learned from this experience:
Don’t assume your data is protected. You may not realize it, but cloud providers are not responsible for your data’s resiliency; you are! If the cloud is in your current or future plans, you need to make data protection a core pillar of your strategy.
Let’s look at why having your own cloud data protection strategy is critical. While Amazon S3 is architected for data durability, that doesn’t equal fast recoverability during an outage:
- Availability zones don’t equal recoverability. S3 is designed to withstand a site outage within a zone, but as last week's outage shows, networking issues can lead to a widespread outage across an entire region.
- Recovery can be slow and tedious. It’s one thing to back up data. It’s another thing entirely to recover it. It can take hours or days to recover data after a failure, especially for hyper-scale applications and databases.
- Data can be compromised or left in an inconsistent state. The cloud itself doesn’t protect data from application- or database-level corruption, or from human error.
As you develop and deploy your data protection strategy, here are a few best practices to make sure you can recover quickly, even from a cloud outage:
- Keep backup data in another service or region. Failures like this one often affect an entire region. A data protection strategy needs to include the ability to recover in another region, another cloud service, or even a private cloud.
- Have a fast recovery process. Traditional backup solutions and scripting-based approaches can’t recover data quickly, particularly if the application needs to be recovered to a different topology.
- Have point-in-time recovery capability. Since data can get compromised at the early stages of an outage, being able to restore applications quickly to a point in time will save you time and money.
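To make the first and third practices concrete, here is a minimal sketch of what the S3 configuration for them looks like. It is written in Python and only builds the configuration payloads as plain dictionaries; the bucket names and IAM role ARN are placeholders, and in a real script these dicts would be handed to the boto3 SDK’s `put_bucket_versioning` and `put_bucket_replication` calls.

```python
# Sketch: S3 configuration payloads for versioning (point-in-time recovery
# of individual objects) and cross-region replication (a copy of every new
# object in a bucket in a second region). All names below are placeholders.

def versioning_config():
    # Versioning must be enabled on both the source and destination buckets
    # before cross-region replication can be configured.
    return {"Status": "Enabled"}

def replication_config(dest_bucket_arn, role_arn):
    # Replicate every new object to a bucket that lives in another region.
    return {
        "Role": role_arn,  # IAM role S3 assumes to copy objects on your behalf
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix means replicate everything
                "Destination": {"Bucket": dest_bucket_arn},
            }
        ],
    }

if __name__ == "__main__":
    cfg = replication_config(
        "arn:aws:s3:::my-backup-bucket-us-west-2",        # placeholder bucket
        "arn:aws:iam::123456789012:role/s3-replication",  # placeholder role
    )
    print(cfg["Rules"][0]["Destination"]["Bucket"])
```

Note that replication only copies objects written after it is enabled, so existing data still needs a one-time copy to the backup region.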
The bottom line is that bad things do happen to good clouds and humans will always be human. This shouldn’t slow your journey to the cloud, but it should open your eyes to the critical need for an effective cloud-first data protection strategy. You can’t ignore recoverability and resiliency of your data just because it’s in the cloud, and don’t expect legacy protection strategies to work in a cloud-first world.
Published at DZone with permission of Shalabh Goyal, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.