5 Dilemmas IT Operations Will Have to Deal With in 2016
A look at the major challenges facing IT operations in 2016, courtesy of StackState.
While the landscape of infrastructure, DevOps, tools, technologies and processes keeps shifting dramatically, the pressure to keep everything up and running keeps growing. With this increasing complexity, small-scale problems can quickly escalate into full-scale issues, reverberating well beyond IT operations into every aspect of your business and impacting your bottom line. Here are five dilemmas we predict IT operations will have to deal with in 2016.
1. The High Cost of Application Downtime
Let’s face it. Application downtime is a dilemma your enterprise cannot afford. With the increasing complexity of IT infrastructures, it’s a huge challenge for organizations to fix outages quickly. On average, it can take IT operations more than half an hour to resolve an IT failure. Half an hour doesn’t sound like much at first, but when your business is software-defined (and most enterprises are these days), this could add up to financial disaster. International Data Corporation (IDC) has researched the implications of IT system downtime and found these other sobering facts:
- For Fortune 1000 companies, the average total cost of unplanned application downtime per year is $1.25 billion to $2.5 billion.
- The average cost of an infrastructure failure is $100,000 per hour.
- The average cost of a critical application failure is $500,000 to $1 million per hour.
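To make the half-hour figure concrete, here is a back-of-the-envelope sketch using the hourly rates above. The function and the incident lengths are illustrative assumptions, not IDC's methodology:

```python
# Rough downtime cost estimate, using the IDC-style hourly rates above.
# The rates are the article's averages; the 30-minute incident length is
# the article's average time-to-resolve, not a measured value.

def downtime_cost(minutes_down: float, hourly_cost: float) -> float:
    """Estimate the cost of a single outage of the given length."""
    return (minutes_down / 60.0) * hourly_cost

# A "typical" half-hour infrastructure failure at $100,000/hour:
infra_incident = downtime_cost(30, 100_000)

# The same half hour against a critical application at $500,000/hour:
critical_incident = downtime_cost(30, 500_000)

print(f"Infrastructure incident: ${infra_incident:,.0f}")
print(f"Critical application incident: ${critical_incident:,.0f}")
```

Even at the averages, a single half-hour critical-application outage costs a quarter of a million dollars, which is why shaving minutes off resolution time matters.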
The facts speak for themselves: application downtime drastically impacts business costs. Resolving IT failures is, and always will be, one of the biggest challenges for IT operations and is a pressing dilemma to deal with in 2016.
2. Too Many Teams in the Problem-solving Kitchen
Figuring out what caused a problem is complicated. Every DevOps team has its own part to play in controlling and maintaining the total stack. But when problems occur, this often makes it difficult to determine where they originated. A simple scenario, which you’ll probably recognize, demonstrates this familiar dilemma:
- In the evening, the infra team upgrades some middleware with its provisioning tool, Chef. After a functional test, everything seems OK.
- The next day, the finance DevOps team detects a higher-than-normal error rate for the sales service on its Splunk dashboard.
- The finance team contacts the sales DevOps team. That team sees a time-out in AppDynamics, but can't figure out what caused it.
- Next, a crisis team is formed, with people from different teams, including a member of the infra team. They collectively figure out that the middleware update most likely caused the problem.
- Finally, the infra team rolls back the middleware upgrade. The problem is solved, but valuable time is wasted.
This problem-solving scenario involves too many teams. To speed up the process, you could create a downtime action plan and have every team record their daily changes and upgrades, but that’s not really the best way to move forward. At StackState, we think the better way to deal with this dilemma is to fully automate the problem-finding process across teams, reducing the time-to-repair to a minimum.
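What "automating the problem-finding process" could look like, in the simplest possible form, is correlating a detected incident with the change events every team already records. This is a hypothetical sketch; the team names, change events and timestamps are invented to mirror the scenario above:

```python
# Hypothetical sketch: rank recent changes as suspects for an incident,
# instead of assembling a cross-team crisis team to find the culprit by hand.
from datetime import datetime, timedelta

# Change events recorded by each team (invented for illustration):
change_log = [
    {"team": "infra",   "change": "middleware upgrade (Chef)", "at": datetime(2016, 1, 11, 22, 30)},
    {"team": "sales",   "change": "sales-service deploy",      "at": datetime(2016, 1, 8, 14, 0)},
    {"team": "finance", "change": "dashboard config tweak",    "at": datetime(2016, 1, 5, 9, 15)},
]

def suspect_changes(incident_at, changes, lookback_hours=24):
    """Return changes inside the lookback window, most recent first."""
    window_start = incident_at - timedelta(hours=lookback_hours)
    recent = [c for c in changes if window_start <= c["at"] <= incident_at]
    return sorted(recent, key=lambda c: c["at"], reverse=True)

# The error-rate spike detected the morning after the upgrade:
incident = datetime(2016, 1, 12, 9, 0)
for c in suspect_changes(incident, change_log):
    print(f"{c['at']}: {c['team']} - {c['change']}")
```

In this toy version, the middleware upgrade is the only change inside the 24-hour window, so it surfaces immediately as the prime suspect, without a crisis team.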
3. Adapting to a Rapidly Changing IT Landscape
Newer agile technologies and processes, such as DevOps, continuous deployment, containerization, microservices and private, public or hybrid cloud computing, keep arriving and changing rapidly. They come at a higher frequency, are more granular and introduce a more complex environment. As application updates and changes in the IT landscape grow exponentially, adapting becomes a complex dilemma, impacting IT operations tremendously. Yes, new DevOps solutions will pop up for each technology stack, and this is a good thing. But it's time to deal with the adapting dilemma head on, by implementing an automated and integrated approach.
4. The DevOps “Freedom of Choice” Conundrum
We have written about freedom of choice in an earlier blog post. It seems like a good thing at first, because it is important for DevOps teams to choose their own tools. The problem presents itself when too many different tools are used across teams. It leads to multiple dashboards and data streams that require continuous reconciliation to understand the overall health of the team's stack. This manual process is time-consuming and error-prone. Since most teams use different tool sets while also depending on services from other teams, the lack of unified health data between them is a real deal breaker. The ability to remove waste and find problems across the whole stack efficiently is the key driver for dealing with this dilemma.
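The reconciliation step itself is mechanical, which is exactly why it can be automated. A hypothetical sketch of collapsing several tools' dashboards into one health view, where the worst reported state wins; the tool names and component states are invented for illustration:

```python
# Hypothetical sketch: reconcile per-tool health data into one view.
# Tool names echo the scenario earlier in the article; states are invented.

SEVERITY = {"clear": 0, "warning": 1, "critical": 2}

# Each team's tool reports health only for the components it watches:
tool_states = {
    "AppDynamics": {"sales-service": "critical"},
    "Splunk":      {"finance-api": "warning", "sales-service": "warning"},
    "Nagios":      {"middleware": "clear"},
}

def merged_health(states):
    """Collapse all dashboards into one state per component (worst wins)."""
    merged = {}
    for component_states in states.values():
        for component, state in component_states.items():
            current = merged.get(component, "clear")
            if SEVERITY[state] > SEVERITY[current]:
                merged[component] = state
            else:
                merged.setdefault(component, current)
    return merged

print(merged_health(tool_states))
```

Note that "sales-service" ends up "critical" even though one tool only saw a warning; that worst-wins rule is one design choice, and a real system would also need to reconcile component naming across tools.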
5. Too Much Data, No Information
Organizations are using a wide variety of tools and systems for monitoring, deployment and incident management, producing a deluge of different types of data. Too much data isn’t a dilemma if it’s turned into useful information, but the challenge for IT operations is translating it all into something meaningful to the business.
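Translating raw data into business-meaningful information usually requires context: knowing which business services depend on the component that is misbehaving. A hypothetical sketch, with an invented dependency graph, of how that context turns a bare alert into business impact:

```python
# Hypothetical sketch: trace a failing component to the business services
# it affects. The dependency graph below is invented for illustration.

# node -> components it depends on
depends_on = {
    "online-sales":  ["sales-service"],
    "sales-service": ["middleware", "database"],
    "reporting":     ["database"],
}

def impacted_services(failing_component, graph):
    """Walk the dependency graph upward from a failing component."""
    impacted = set()
    for node, deps in graph.items():
        if failing_component in deps:
            impacted.add(node)
            impacted |= impacted_services(node, graph)  # propagate upward
    return impacted

# A bare "middleware is down" alert becomes business information:
print(impacted_services("middleware", depends_on))
```

Without this kind of graph, "middleware is down" is just data; with it, the same alert reads as "online sales are at risk," which is what the business actually needs to hear.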
IT operations stores information in different silos and systems. Some organizations have started to apply big data analytics to a single type of operations data, such as huge sets of metric streams, and this helps a bit. But without context, it doesn't show how a problem relates to critical business services, and drawing on only one data source degrades the outcome. Data just for data's sake doesn't make sense. Want to be part of the solution? Our software is now available for launch customers. Join our Private Beta Program and get the benefits!
Published at DZone with permission of Lodewijk Bogaards. See the original article here.
Opinions expressed by DZone contributors are their own.