Why the New York Stock Exchange “Glitch” Should Bring Attention to Continuous Delivery
What does the recent New York Stock Exchange glitch have to do with continuous delivery? Read on to find out.
The four-hour outage at the New York Stock Exchange on July 8 brought renewed focus to businesses’ and consumers’ growing dependence on technology and the problems that arise when that technology goes awry.
Officials with the 223-year-old NYSE said a computer glitch was to blame for the shutdown. They said in a statement late in the day that the computer problem involved a configuration issue, though they didn’t elaborate. Earlier in the day, an unnamed trader told the New York Times that exchange officials informed them that the problems were related to a software update.
While we have far too little information to say with any certainty what exactly caused the “glitch,” last week’s near-disaster is as good a time as any to bring attention to software best practices that have been proven to reduce downtime, strengthen security, and mitigate risk.
Continuous delivery is a proven software and database development practice aimed at reducing the cost, time, and risk of releases. It enables software to be deployed to production in small increments, frequently and regularly, sometimes many times per day.
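To make the idea of “small increments” concrete, here is a minimal sketch, in Python with SQLite, of one common way database changes can be shipped: as numbered migration scripts kept in version control and applied exactly once, in order, by the release process. The table and migration contents are hypothetical and not taken from the article.

```python
# A minimal sketch of incremental, versioned database changes.
# Each migration is a small, numbered step; the release process applies
# only the steps that have not yet run. (Schema and names are hypothetical.)
import sqlite3

MIGRATIONS = [
    (1, "CREATE TABLE orders (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)"),
    (2, "ALTER TABLE orders ADD COLUMN price REAL"),
]

def migrate(conn: sqlite3.Connection) -> None:
    # Track which increments have already been applied.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:
            conn.execute(sql)                                      # apply one small change
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            conn.commit()                                          # each increment lands on its own

if __name__ == "__main__":
    migrate(sqlite3.connect("example.db"))
```

Because each change is small and recorded, a failed release can be traced to one specific increment instead of a large, opaque update.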
Continuous delivery reduces the risk of failed releases and leads to more reliable and resilient systems. Last year, Puppet Labs researched the effect of DevOps practices, in particular automated deployment processes and the use of version control for infrastructure management. The data showed that high-performing organizations ship code 30 times faster (and complete these deployments 8,000 times faster), have 50% fewer failed deployments, and restore service 12 times faster than their peers. The research also showed that the longer organizations had been employing these practices, the better they performed.
Automation is one of the main tenets of continuous delivery. Essentially, this means that an engineer checks in code and that code gets to production with no manual steps. It doesn’t mean the code is deployed blindly or recklessly; the goal of continuous delivery is to do everything you would normally do during a deployment to ensure quality, but to automate each step so it just happens.
The key is to automate everything: your builds, your testing, your releases, your configuration changes, and everything else. Manual processes are inherently less repeatable, more prone to error, and less efficient. Once a process is automated, less effort is needed to run it and monitor its progress, and it produces consistent results.
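As a rough illustration of what “automate everything” can look like, here is a hypothetical Python sketch of a release pipeline in which every step runs in a fixed order with no manual intervention and the first failure stops the release. The step names and `make` targets are placeholders, not part of any real project.

```python
# A hypothetical sketch of a fully automated release pipeline:
# every step is expressed as code, runs in order, and the pipeline
# fails fast so a broken step never reaches production.
import subprocess
import sys

PIPELINE = [
    ("build",        ["make", "build"]),            # compile / package the artifact
    ("unit tests",   ["make", "test"]),             # automated test suite
    ("config check", ["make", "validate-config"]),  # verify configuration changes
    ("deploy",       ["make", "deploy"]),           # push the artifact to production
]

def run_pipeline() -> int:
    for name, command in PIPELINE:
        print(f"--> {name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: stop the release at the first broken step.
            print(f"Pipeline stopped: '{name}' failed with exit code {result.returncode}")
            return result.returncode
    print("All steps passed; release is live.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

Because the same script runs on every check-in, the build that reaches production is exactly the one that was tested, which is the repeatability that manual processes lack.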
This requires a change in process and culture, but the benefits are worth it. Practice makes perfect, and automation takes it a step further: repeatable, reliable, and efficient, making your infrastructure far less likely to suffer the kind of major “glitches” that have been far too common in recent years.
DBmaestro is currently conducting a survey to explore the risks associated with different practices in database development and deployment. Please fill out the short survey for a chance to win Beats by Dre headphones!
Published at DZone with permission of Yaniv Yehuda, DZone MVB.