Azure Outage Post-Mortem — Part 1
See what we're learning from the post-mortems of last week's huge Azure outage and how Availability Zones might have softened the blow.
The first official post-mortems are starting to come out of Microsoft regarding the Azure outage that happened last week. While this first post-mortem addresses the Azure DevOps outage specifically (previously known as Visual Studio Team Services, or VSTS), it gives us additional insight into the breadth and depth of the outage, confirms its cause, and sheds light on the challenges Microsoft faced in getting things back online quickly. It also hints at features and functionality Microsoft may pursue to handle situations like this better in the future.
As I mentioned in my previous article, features such as the new Availability Zones being rolled out in Azure might have minimized the impact of this outage. In the post-mortem, Microsoft confirms what I previously said.
"The primary solution we are pursuing to improve handling datacenter failures is Availability Zones, and we are exploring the feasibility of asynchronous replication."
Until Availability Zones are rolled out across more regions, the only disaster recovery options you have are cross-region, hybrid-cloud, or even cross-cloud asynchronous replication. Software-based #SANless clustering solutions available today enable such configurations, providing a very robust RTO and RPO even when replicating over great distances.
When you use SaaS/PaaS solutions, you are really depending on the cloud service provider (CSP) to have an ironclad HA/DR solution in place. In this case, a pretty significant deficiency was exposed, and we can only hope it leads all CSPs to take a hard look at their SaaS/PaaS offerings and address any HA/DR gaps that might exist. Until then, it is incumbent upon the consumer to understand the risks and do what they can to mitigate the impact of extended outages, or simply choose not to use PaaS/SaaS until the risks are addressed.
The post-mortem really gets to the root of the issue: what do you value more, RTO (recovery time objective, how quickly you are back online) or RPO (recovery point objective, how much data you can afford to lose)?
"I fundamentally do not want to decide for customers whether or not to accept data loss. I've had customers tell me they would take data loss to get a large team productive again quickly, and other customers have told me they do not want any data loss and would wait on recovery for however long that took."
It will be impossible for a CSP to make that decision for a customer. I can't see a CSP ever deciding to lose customer data unless the original data is completely lost and unrecoverable. In that case, a near-real-time async replica is about as good an RPO as you are going to get in an unexpected failure.
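To make the trade-off concrete, here is a minimal sketch (hypothetical numbers, not from the post-mortem) of why an asynchronous replica bounds your data loss: in the worst case, everything written since the last replicated point is gone when the primary fails without warning.

```python
# Hypothetical illustration of RPO under asynchronous replication.
# The lag values are invented for the example; real replication lag
# depends on distance, bandwidth, and write volume.

def worst_case_rpo_seconds(replication_lag_s: float) -> float:
    """Worst-case data loss window if the primary is lost abruptly:
    all writes committed on the primary but not yet shipped to the
    async replica (i.e., within the replication lag) can be lost."""
    return replication_lag_s

# An async replica lagging ~2 seconds behind the primary can lose
# up to ~2 seconds of committed writes:
print(worst_case_rpo_seconds(2.0))  # 2.0

# A synchronous replica (zero lag) loses no committed writes,
# but synchronous replication over great distances hurts write latency,
# which is why async is the practical choice cross-region.
print(worst_case_rpo_seconds(0.0))  # 0.0
```

This is the essence of the quote above: only the customer can decide whether that small-but-nonzero loss window is acceptable in exchange for getting back online quickly.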
However, was this outage really unexpected and without warning? Modern satellite imagery and improvements in weather forecasting almost certainly gave fair warning that significant weather-related events were coming to the area.
With Hurricane Florence bearing down on the southeastern US as I write this post, I certainly hope that if your data center is in the path of the hurricane, you are taking proactive measures to gracefully move your workloads out of the impacted region. The benefits of proactive disaster recovery over reactive disaster recovery are numerous: no data loss, ample time to address unexpected issues, and the ability to manage human resources so that employees can take care of their families rather than spend the night at a keyboard trying to put the pieces back together again.
Again, enacting a proactive disaster recovery would be a hard decision for a CSP to make on behalf of all their customers, as planned migrations across regions will incur some amount of downtime. This decision will have to be put in the hands of the customer.
[Image: Hurricane Florence satellite image taken from the new GOES-16 satellite, courtesy of Tropical Tidbits.]
So what can you do to protect your business-critical applications and data? As I discussed in my previous article, cross-region, cross-cloud, or hybrid-cloud models with software-based #SANless cluster solutions go a long way toward addressing your HA/DR concerns, with excellent RTO and RPO for cloud-based IaaS deployments. Instead of application-specific solutions, software-based, block-level volume replication solutions such as SIOS DataKeeper and SIOS Protection Suite replicate all data, providing a data protection solution for both Linux and Windows platforms.
My oldest son just started his undergrad degree in Meteorology at Rutgers University. Can you imagine a day when artificial intelligence (AI) and machine learning (ML) are used to consume weather-related data from NOAA to trigger a planned disaster recovery migration two days before the storm strikes? I think I just found a perfect topic for his Master's thesis. Or better yet, have him and his smart friends get funding for WeatherWatcher LLC, a tech startup that applies AI and ML to weather-related data to control proactive disaster recovery events.
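The core of that idea doesn't even need ML to sketch: it is a decision rule over a forecast feed. Here is a minimal illustration in Python; everything in it (the forecast fields, the thresholds, the region names) is hypothetical and stands in for real NOAA data and a real orchestration hook, neither of which is shown here.

```python
# Hypothetical sketch of a weather-triggered proactive DR decision.
# Forecast fields, thresholds, and region names are invented for illustration;
# a real system would pull from a forecast feed and call an orchestration API.

from dataclasses import dataclass

@dataclass
class Forecast:
    storm_category: int          # e.g., Saffir-Simpson category from the forecast
    hours_until_landfall: float  # projected time until the storm arrives
    landfall_region: str         # region the storm is projected to hit

def should_migrate(fc: Forecast, datacenter_region: str,
                   min_category: int = 2, lead_time_hours: float = 48.0) -> bool:
    """Recommend a planned migration when a sufficiently strong storm is
    projected to hit our region within the lead time needed for a graceful,
    zero-data-loss move of the workloads."""
    return (fc.landfall_region == datacenter_region
            and fc.storm_category >= min_category
            and fc.hours_until_landfall <= lead_time_hours)

# A Category 4 storm 36 hours out from our region: migrate proactively.
print(should_migrate(Forecast(4, 36.0, "us-south-central"), "us-south-central"))  # True

# The same storm headed at a different region: stay put.
print(should_migrate(Forecast(4, 36.0, "us-south-central"), "us-east"))  # False
```

The interesting part, and where the thesis material lives, is replacing the fixed thresholds with learned models of forecast confidence versus migration cost, since a planned cross-region move incurs real downtime.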
I think we are just at the cusp of IT analytics solutions that apply advanced machine-learning technology to cut the time and effort you need to ensure delivery of your critical application services. SIOS iQ is one of the solutions leading the way in that field.
Batten down the hatches and get ready: hurricane season is just starting, and we are already in for a wild ride. If you would like to discuss your HA/DR strategy, reach out to me on Twitter @daveberm.
Published at DZone with permission of David Bermingham . See the original article here.
Opinions expressed by DZone contributors are their own.