There’s a gap in the continuous delivery and DevOps story. Faster service delivery and tight business/IT alignment are certainly ushering in a new approach to development. However, it is imperative to ensure that the containers receiving the deployed code are right-sized for the application architecture and the expected production load.
So what do you do? Most companies rely on multiple fragmented tools to monitor their IT infrastructure and apps. However, these point solutions fail to provide the big-picture visibility you need to respond to business demands.
The issue is brought into sharp relief in the cloud environment. Here, the ultimate promise is that you only pay for the services you use. If the cloud environment isn't sized correctly, you're likely paying for unused capacity, or worse, your app can't meet peak demand and becomes unresponsive. Either way you waste precious resources and money, and risk lost revenue and damage to your brand's reputation.
For example, in the United States, Cyber Monday is the Monday following the Thanksgiving holiday, a day when online retailers offer exceptional bargains. This past Cyber Monday, November 30, 2015, Target Corp. was unable to handle the surge of e-commerce traffic to its site, and the website became inaccessible to many users, many of whom were in the middle of making purchases. Many online guests saw an error message instead of their shopping cart or product catalog.
Target Corp. was in good company that day. Newegg, HP, newcomer Jet.com, Saks (mobile), Victoria's Secret, Shutterfly, and Footlocker all suffered delays or outages. PayPal also suffered an outage, which prevented shoppers from buying goods on sites that use PayPal's payment service.
For all the talk and hype surrounding DevOps and Continuous Delivery, just doing things faster is far too myopic a goal. What is needed is an automated IT performance and capacity management solution — like Automic Sysload, for instance — that can anticipate future computing requirements and is built into the DevOps practice or Continuous Delivery pipeline.
This is a powerful force in Continuous Delivery and DevOps. It means you can perform in-depth strategic analysis of your load tests, indicating what the deployment environment for the new or updated service should look like. What would you prefer: a calculation from your existing patchwork of tools showing you need 18 nodes, or a comprehensive forecast, including projections and baselines, showing you only need 10? I think we know the answer.
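To make the 18-versus-10 contrast concrete, here is a minimal, purely illustrative sketch of the arithmetic behind it. The function, numbers, and throughput figures are all hypothetical assumptions (this is not an Automic Sysload API): a naive tool sizes against the single worst spike seen in a load test, while a forecast-driven approach sizes against a sustained baseline, such as the projected p95 load.

```python
import math

def nodes_needed(peak_rps: float, rps_per_node: float, headroom: float = 0.2) -> int:
    """Nodes required to serve peak_rps with a 20% safety headroom."""
    return math.ceil(peak_rps * (1 + headroom) / rps_per_node)

# Naive sizing: provision for the single worst spike observed in the load test.
naive = nodes_needed(peak_rps=7200, rps_per_node=500)     # sizes for the outlier

# Forecast-driven sizing: provision for the projected sustained p95 baseline.
forecast = nodes_needed(peak_rps=4000, rps_per_node=500)  # sizes for the trend

print(naive, forecast)  # prints: 18 10
```

The gap between the two answers is exactly the unused capacity you would otherwise pay for.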
An automated approach to predictive capacity management enhances your DevOps strategy and Continuous Delivery capability in other ways. A dashboard view of your IT resource usage can identify saturation points, reducing both risk and the chance of an outage. Simulations can model virtual and physical application workload placement on existing infrastructure to determine the impact of long-term changes.
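The kind of placement simulation described above can be sketched in a few lines. This is a hypothetical toy model, not a vendor algorithm: workloads are reduced to a single CPU-unit demand and packed onto identical hosts with a first-fit-decreasing heuristic, so you can ask "how many hosts would this mix of workloads saturate?"

```python
def place_workloads(workloads, host_capacity):
    """First-fit-decreasing placement of workloads (in CPU units) onto hosts.

    Returns a list of hosts, each a list of the workloads placed on it.
    """
    hosts = []
    for w in sorted(workloads, reverse=True):  # largest workloads first
        for h in hosts:
            if sum(h) + w <= host_capacity:    # reuse an existing host if it fits
                h.append(w)
                break
        else:
            hosts.append([w])                  # otherwise bring a new host online
    return hosts

# Illustrative mix of six workloads on hosts with 10 CPU units of capacity:
plan = place_workloads([4, 8, 1, 4, 2, 1], host_capacity=10)
print(len(plan))  # prints: 2
```

A real capacity tool would model memory, I/O, and time-varying load as well, but even this sketch shows how simulation answers "what if we add these workloads?" before anything is deployed.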
Predictive capacity management can also act as a powerful negotiating tool when dealing with cloud providers. By accurately sizing the application during the QA stage, for example, you can produce a more accurate pricing forecast. Better still, you can shop that sized requirement around to multiple cloud providers to find the best deal for your hosted service.
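Once the application is accurately sized, comparing providers is simple arithmetic. The sketch below is illustrative only: the provider names and hourly rates are made up, not real pricing, but it shows how a node count from the QA stage turns directly into a monthly cost forecast per provider.

```python
def cheapest_provider(node_count, price_per_node_hour, hours=730):
    """Monthly cost per provider for a sized deployment; return the best deal.

    hours=730 approximates one month of continuous operation.
    """
    costs = {name: rate * node_count * hours
             for name, rate in price_per_node_hour.items()}
    best = min(costs, key=costs.get)
    return best, costs

# Hypothetical hourly rates for the 10-node deployment sized during QA:
best, costs = cheapest_provider(10, {"provider_a": 0.12,
                                     "provider_b": 0.10,
                                     "provider_c": 0.15})
print(best, round(costs[best], 2))  # prints: provider_b 730.0
```

With a concrete forecast like this in hand, the pricing conversation with a provider starts from your numbers rather than theirs.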