
What Is the Top Cause of Application Downtime Today?


Are companies causing application downtime by testing at the wrong time? Visibility of code-level issues is just the start of good performance.


I frequently talk to our customer base about what keeps them up at night. The answers vary widely, but they tend to fall into one of two categories. The first is the conditioned fear of some monster lurking behind the scenes that could pounce at any time. The second, of course, is the actual monster: downtime on a critical system. Ask most tech folks and they will tell you that outages seem to happen only late at night or early in the morning, and that yes, they really do keep them up.

Entire companies and product lines have been built around providing those in the IT world with some ability to sleep at night. Modern enterprises have spent millions to mitigate the risk and prevent their businesses from having a really bad day because of an outage. Cloud providers are attuned to the downtime dilemma and spend lots of time, money, and effort to build in redundancy and make "High Availability" (HA) as easy as possible. The frequency of "hardware" or server issues continues to dwindle.

Where Does the Downtime Issue Start?

Most of the companies I have talked to say their number one cause of outages and customer interruptions is ultimately related to the deployment of new or upgraded code. Often I hear that the operations team has little or no involvement with an application until it is put into production. It is a bit ironic that this is also the area where companies tend to drastically under-invest. They opt instead to invest in ways to "Scale Out or Up," or in how to survive asteroids hitting two out of three of their data centers.

Failing over broken or slow code from one server to another does not fix it. Adding more servers to distribute the load can mitigate a problem, but can also escalate the cost dramatically. In most cases, the solutions they apply don't address the primary cause of the problems.

There are some fantastic tools out there that can provide better visibility into code-level issues, such as New Relic, AppDynamics, and others, but the real problem is that they often end up being used to diagnose issues only after they have appeared in production. Most companies carry out some amount of testing before releasing code, but typically it is a fraction of what they should be doing. At my company, which specializes in open-source databases, we get a lot of calls about issues that have prevented customers' end users from using critical applications. Many of these problems could have been fixed before they cost revenue and reputation.
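To make that concrete, here is a minimal sketch of the kind of check that could run in a CI pipeline before code is promoted, rather than waiting for a monitoring tool to flag the regression in production. Everything in it is illustrative: run_critical_query() is a hypothetical stand-in for whatever critical path you want to guard, and the 200 ms budget and sample count are assumed numbers you would replace with your own service-level targets.

```python
# pre_deploy_check.py - illustrative sketch only; run_critical_query() and
# the 200 ms budget are hypothetical stand-ins for your own critical path.
import statistics
import sys
import time

LATENCY_BUDGET_MS = 200   # assumed budget; tune to your own SLO
SAMPLES = 20              # number of timed runs to smooth out noise


def run_critical_query() -> None:
    """Stand-in for the query or API call you want to guard.

    In a real pipeline this would hit a staging database loaded with
    production-like data, not a stubbed sleep.
    """
    time.sleep(0.05)  # simulate roughly 50 ms of work


def main() -> int:
    timings_ms = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        run_critical_query()
        timings_ms.append((time.perf_counter() - start) * 1000)

    p95 = statistics.quantiles(timings_ms, n=20)[-1]  # rough 95th percentile
    print(f"p95 latency: {p95:.1f} ms (budget {LATENCY_BUDGET_MS} ms)")

    # A non-zero exit code fails the CI job, blocking the deployment.
    return 0 if p95 <= LATENCY_BUDGET_MS else 1


if __name__ == "__main__":
    sys.exit(main())
```

A gate like this does not replace proper load testing, but it catches the obvious regressions in the same pipeline that already runs the unit tests, which is exactly where they are cheapest to fix.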

I think it's time for those of us in the technology industry to rethink our QA, testing, and pre-deployment requirements. How much time, effort, and money can we save if we catch these "monsters" before they make it into production?

Not to mention how much better our operations team will sleep...


