
What the Military Taught Me About DevOps

Failing fast isn't unique to DevOps, but it's an incredibly important strategy. Practicing failures and mitigating the resulting issues is a standard function of DevOps teams.

By Chris Short · Feb. 07, 17 · Opinion · 10.8K Views


This article is featured in the new DZone Guide to DevOps: Continuous Delivery and Automation, Volume IV. Get your free copy for more insightful articles, industry statistics, and more!

When we talk about failures in DevOps, we are usually talking about how we want to fail fast. What is failing fast, though? The term “fail fast” has several origins, but the one referred to in Agile is probably the most appropriate for DevOps. Fail fast is a strategy in which you try something, it fails, feedback is delivered quickly, you adapt accordingly, and you try again. Failing fast is the only way to fail in DevOps. Roll out a product, service, or application quickly, and if it doesn’t pan out, move on quickly.
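The try, fail, get feedback, adapt loop described above can be sketched in a few lines. The build names and `health_check` hook here are hypothetical stand-ins for illustration, not a prescribed implementation:

```python
def fail_fast_release(builds, health_check, max_attempts=3):
    """Try each candidate build; on failure, surface feedback
    immediately, adapt (move to the next candidate), and try again."""
    for attempt, build in enumerate(builds[:max_attempts], start=1):
        if health_check(build):
            return f"shipped {build} on attempt {attempt}"
        # Fast feedback: the failure is visible right away, so the
        # team can adapt instead of discovering it weeks later.
        print(f"{build} failed health checks; rolling back and adapting")
    return "all attempts failed; re-plan"


# The first two candidates fail fast; the third passes its checks.
result = fail_fast_release(
    ["v1.0", "v1.1", "v1.2"],
    health_check=lambda build: build == "v1.2",
)
```

The point of the sketch is the shape of the loop: each failure is cheap, visible, and immediately followed by an adapted attempt.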

We don’t often think of military failures in a positive light, and rightfully so. The Battle of the Little Bighorn did not work out well for General Custer. Operation Eagle Claw, the aborted operation to rescue hostages in Iran, resulted in loss of life and contributed to President Jimmy Carter’s failed re-election bid. And a nuclear mishap almost resulted in the creation of “a very large Bay of North Carolina,” according to Dr. Jack ReVelle, an Explosive Ordnance Disposal officer. For the scope of this article, we’ll focus on this near-nuclear B-52 blunder, exploring what went wrong and applying the lessons learned from it to our DevOps craft today.

The 1961 Goldsboro B-52 crash is what I consider a near miss for humanity and a very well-timed wake-up call for the US military, which nearly bombed its own country. Around midnight on January 24, 1961, a B-52G Stratofortress was flying a Cold War alert flight out of Seymour Johnson Air Force Base, North Carolina. These alert flights were part of the US military’s answer to what was believed to be a superior Soviet ballistic missile threat. The B-52G that took off that night in ‘61 had two Mark 39 thermonuclear weapons on board.

The Cold War was a very tense time in world history. The battle between East and West over who could deploy weapons fastest was far greater than any Vim vs. Emacs flamewar could ever hope to be; it was almost unquantifiable. The US military commanders in charge of the nuclear weapons were so afraid of not being able to respond to an attack that they routinely fought against the use of safeties in nuclear weapons. We can draw a parallel here to eager investors wanting to see a return on investment or managers trying to meet unrealistic delivery dates.

Continuing our analogy, the role of development and operations will be played by the scientists and bomb makers of the era, who wanted weapons to fail safely (that is, not explode when something went wrong). The makers wanted safe backout plans laid out as part of the design, planning, and implementation. Meanwhile, the investors/managers (military commanders) wanted weapons that could be deployed quickly and cheaply. The makers and commanders were at odds in their philosophies. As a result, the Mark 39 bombs had their safeties disabled while the aircraft carrying them was aloft. On that night in 1961, the B-52G carrying these weapons above Faro, North Carolina, suffered a structural failure and broke up in mid-air.

The two Mark 39 bombs, clocking in at 3.8 megatons apiece (more than 250 times the destructive power of the Hiroshima bomb), plummeted to earth. One Mark 39 deployed its parachute (part of the weapon's planned detonation sequence), and it was later discovered that three of its four safety mechanisms had been tripped during the accident, leaving it one step away from a nuclear explosion. This particular bomb landed in a tree and was safely recovered.

The other Mark 39 bomb did not deploy its parachute. Instead, it became a nuclear lawn dart and slammed into a swampy patch of earth at an estimated 700 miles per hour, breaking apart on impact. The core (or pit) of the bomb was safely recovered, but a complete recovery of all of the weapon’s components was not possible due to the conditions. As a result, a small chunk of eastern North Carolina has fissile material left over from the accident. A concrete pad is in place to prevent tampering, and a hastily written law forbids farming or any other activity deeper than five feet at the site. There was no disaster recovery plan, and it showed.

The 1961 Goldsboro B-52 crash is a prime example of failing fast gone wrong. We were one step away from not one but two multi-megaton nuclear detonations on the US eastern seaboard. The US military did not want inert bombs to fall on the Soviet Union if a delivery system was somehow disabled; the investors/managers wanted the weapons to fail spectacularly. The development and operations teams wanted the weapons to fail fast and safely. The investors/managers did not want to bother with security testing, failure scenarios, and other DevOps-style planning, and this nearly resulted in catastrophic costs.

Luckily, most of us following DevOps practices do not have lives or the fate of humanity in our hands. We can deploy tools like Chaos Monkey in our production environments with little risk to life and limb. If you break something in staging, it is serving its intended purpose: catching bugs safely before they manifest in production. Take advantage of your dev, test, and stage environments. If those non-production environments are not easily rebuilt, do the work to make them immutable. Automate their deployment so you can take the time to rigorously test your services. Practice failures vigorously; spend the time needed to correct issues or automate them out of your systems.
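A failure drill in a rebuildable non-production environment can start as simply as terminating a random slice of a fleet, in the spirit of Chaos Monkey. This is a minimal sketch with made-up instance names and a hypothetical `chaos_drill` helper, not Netflix's actual tooling:

```python
import random


def chaos_drill(instances, kill_fraction=0.4, seed=None):
    """Pick a random subset of instances to terminate so the team can
    practice detection and recovery before it happens for real."""
    rng = random.Random(seed)  # seedable, so a drill can be replayed
    count = max(1, int(len(instances) * kill_fraction))
    victims = rng.sample(instances, count)
    survivors = [i for i in instances if i not in victims]
    return victims, survivors


# Run against a staging fleet; fixing the seed makes the drill
# reproducible so the same scenario can be rehearsed until the
# team's response is routine.
victims, survivors = chaos_drill(
    ["web-1", "web-2", "web-3", "web-4", "web-5"], seed=42
)
```

The seed parameter is the practice-vs-surprise dial: replay a known scenario while training, then drop the seed once recovery is muscle memory.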

Following the near disaster in Goldsboro, the US government conducted an amazingly detailed postmortem. Speaking with Dr. Jack ReVelle, the Explosive Ordnance Disposal (EOD) officer who responded to the accident, I learned that significant improvements were made to training and documentation. Security and safety became an iterative part of the development and deployment processes. Prior to this incident, Dr. ReVelle was not trained in disaster recovery of nuclear devices: “We were writing the book on it as we went.” As a result of this accident, techs were taught how to manage the systems before deployment. Additionally, documentation was updated continuously to include necessary information about the systems as they were being developed. Better tooling was procured for the teams to manage incidents. In short, significant rigor was added to training programs, deployment plans, and documentation processes in the EOD teams. EOD teams now planned for failures the way DevOps teams of today plan for failures.

One thing to keep in mind about everything procured by the US government is that it is delivered by the person or company that is the lowest bidder. In other words, some failure rate is expected in anything. The military has to practice failing in any and all forms imaginable, just as DevOps professionals should. Practicing failures is important because it improves your recognition and response times. Your teams will build a sort of “muscle memory” that lets them quash issues before they become incidents, and lets them iterate through one scenario while discussing other scenarios more calmly. Common sense is not common, so explicit documentation and processes are incredibly important. Remember, the most important part of failing is not the fact that something failed, but how you respond to and learn from such failures.

More DevOps Goodness

For more insights on implementing unambiguous code requirements, Continuous Delivery anti-patterns, best practices for microservices and containers, and more, get your free copy of the new DZone Guide to DevOps!

If you'd like to see other articles in this guide, be sure to check out:

  • Continuous Delivery Anti-Patterns

  • 9 Benefits of Continuous Integration

  • Better Code for Better Requirements

  • Autonomy or Authority: Why Microservices Should Be Event-Driven

  • Executive Insights on the State of DevOps

  • Branching Considered Harmful! (for Continuous Integration)

  • Microservices and Docker at Scale

Tags: DevOps, Continuous Integration/Deployment, Agile

Opinions expressed by DZone contributors are their own.
