Using DevOps Managed Services to Improve Operations With Automated Deployments and Data
With greater access to more data and automation throughout the development process, more enterprises are using managed services.
In the Digital Revolution, timelines for product delivery and information analysis are slim. Customers set the pace by consuming products and information on-demand — their way. This places immense pressure on businesses to deliver continuously and reliably to satisfy the rapidly escalating demand for all types of goods. Software is the center of the business universe, vital to all aspects of operations. Building and reliably delivering software is now vital to short and long-term success.
According to IDC, as of 2017 more than 580 million software applications were on the market, and another 500 million applications are expected to surface by 2022. Even if those estimates are off by a couple of hundred million, this volume places tremendous stress on operations. Our own research indicates that there are more than 100 million companies and government organizations worldwide (~30% in the United States).
As businesses continue to maintain and modernize older applications, they are also creating and delivering new applications that, in sum, can wear out their staff and budgets while increasing technical and process complexity between organizations.
Typically, IT operations are responsible for providing the foundation for development by controlling and maintaining the systems that their internal customers require. Inability to deliver not only erodes credibility and confidence, it forces software engineering teams to satisfy their requirements by working around the barriers — “shadow IT.” Gartner studies document that in 2017, shadow IT accounted for 30 to 40 percent of IT spending in large enterprises, and research by Everest Group found that it comprises 50 percent or more of IT spending.
Corporate business drivers dictate the investment in and use of technologies and systems. In the digital economy, failure to deliver, delivering the wrong solutions, or delayed delivery greatly affects organizations’ ability to achieve required business outcomes. Forrester tells us that a significant portion of technology spend is devoted to software engineering infrastructure. The blend of workloads, applications, and broad access to resources, paired with consistent delivery methods to support software development, is vital. How technology is used is just as important as the methods and processes for creating and delivering software code.
Traditional software development methods, such as Waterfall and Agile, strain and often block process efficiencies, negatively affecting time to market, denying customers the information and products they demand, impacting revenue, elevating operating costs, and undermining predictability. In Agile software development, Scrum Masters and development directors capture and report data. However, our experience reveals that capture is inconsistent and largely manual, data sets are limited, and comparing and sharing across organizations is inexact.
Capturing and synthesizing the data and deriving intelligence is complicated. Nevertheless, a good starting point is measuring lead time for changes, deployment frequency, mean time to restore (MTTR), and change fail rate. Tracking these variables aids in the adoption of DevOps methods, and thousands of companies are moving forward with this. However, the collection should be automated.
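As a minimal sketch of how these four measures can be derived automatically, the snippet below computes them from deployment records. The record structure and values are hypothetical; in practice they would be pulled from a CI/CD or incident-tracking system rather than entered by hand.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records:
# (commit time, deploy time, failed?, restore time if failed)
deployments = [
    (datetime(2023, 5, 1, 9),  datetime(2023, 5, 1, 15), False, None),
    (datetime(2023, 5, 2, 10), datetime(2023, 5, 3, 11), True,
     datetime(2023, 5, 3, 14)),
    (datetime(2023, 5, 4, 8),  datetime(2023, 5, 4, 12), False, None),
]

# Lead time for changes: commit to deploy, averaged.
lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency: deployments per day over the observed window.
period_days = (deployments[-1][1] - deployments[0][1]).days or 1
deploy_frequency = len(deployments) / period_days

# Change fail rate: fraction of deployments that caused a failure.
failures = [(d, r) for _, d, failed, r in deployments if failed]
change_fail_rate = len(failures) / len(deployments)

# MTTR: failure to restoration, averaged over failed deployments.
restore_times = [r - d for d, r in failures]
mttr = (sum(restore_times, timedelta()) / len(restore_times)
        if restore_times else None)

print(f"avg lead time:    {avg_lead_time}")
print(f"deploys/day:      {deploy_frequency:.2f}")
print(f"change fail rate: {change_fail_rate:.0%}")
print(f"MTTR:             {mttr}")
```

Once a collector like this runs against real pipeline data on every release, the trend lines, not the individual numbers, become the useful signal.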
Most development teams have limited visibility across and within their software “production” — the coding and delivery processes. Visibility is paramount and getting the process data into the hands of key stakeholders is critical. Lead-time, deployment frequency, MTTR, and change failure data enabled with a complete and automated delivery and a mapping of the value-stream can provide great value to the enterprise. Value streams can be organized into single-form processes, and cross-mapped to stakeholders and capabilities and correlated to the data streams. It is also important to obtain operating data from systems and applications and the infrastructure involved in the software development processes.
A very powerful and proven method for companies to access and wrap their arms around the data and constraint resolution is through DevOps managed services. These services provide real-time visibility into the integrated technical-development operations processes. Data is generated from the use of hardware and software systems in response to the actions of the contributors in the value-stream from code design to release and into production.
DevOps managed services help organizations identify vulnerable process areas to remedy, improve suspect code, and provide feedback for continuous improvement. Code scanning also helps identify weak points and anomalies in the code, providing assurance that improvements are taking hold. When there are fewer issues, the value-stream operates more efficiently, placing less stress on the contributors, including the testing processes and the infrastructure they leverage to produce and reliably deliver.
A client had about 300 applications with shared software pipelines, and failures were constant. They originally had an incident-management team with no automation or reliability practice, plus four dedicated technicians working around the clock to oversee process and delivery operations.
Their typical high-priority outage lasted 48 hours, a period during which their customers could not purchase products through their application, which meant lost revenue. In the first 24 hours, they determined what the problem was, where it sat in the greater process, and the impact points within the hardware and software infrastructure; resolution then consumed another 24 hours. By incorporating DevOps managed services into their existing operation and identification process, they dropped the total outage time to three hours. Most organizations can see improvement within a very short period. Front-line engineers are quickly and easily able to identify and resolve challenges by themselves or with key stakeholders. The key is to automate the delivery of the information and the remediation processes and to keep shrinking the time to root cause. Essentially, they went from silos of data to sharing data within and between teams to permanently resolve root causes.
The managed service was able to reduce the total number of incidents, the mean time between failures, and the length of the incident, improving the repeatability of DevOps methods.
This is where Software-as-Infrastructure comes in. DevOps managed services should provide infrastructure that's composable, flexible, and automated so that the infrastructure is available to developers, and the workloads and the applications are easily accessible in a self-service format.
The tie-in with DevOps is not just implementing DevOps tools and hiring engineers, but building repeatability into methodologies, enabled by the managed infrastructure. Organizations can only achieve repeatability and reliably deliver code if they can guarantee resource availability. That includes both physical and virtual resources, and the ability to repeat a methodology to create the next batch of code while circumventing the issues that interrupted uptime the first time. In other words, optimizing the learning process, because DevOps is a process of continuous learning.
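One common way to make resource availability repeatable is to declare environments as data, so the same specification always yields the same environment. The sketch below illustrates the idea; the class names, fields, and the pretend provisioner are illustrative assumptions, not tied to any real tool.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EnvironmentSpec:
    """Declarative description of an environment a pipeline needs."""
    name: str
    cpu_cores: int
    memory_gb: int
    services: tuple  # e.g. ("postgres", "redis")


def provision(spec: EnvironmentSpec) -> dict:
    """Pretend provisioner: the same spec always produces the same
    environment, which is what makes a delivery pipeline repeatable."""
    return {
        "env": spec.name,
        "resources": {"cpu": spec.cpu_cores, "mem_gb": spec.memory_gb},
        "services": list(spec.services),
    }


ci_spec = EnvironmentSpec("ci-build", cpu_cores=4, memory_gb=8,
                          services=("postgres", "redis"))

# Two invocations yield identical environments: no configuration drift
# between the build that failed yesterday and the one retried today.
assert provision(ci_spec) == provision(ci_spec)
```

Real infrastructure-as-code tools apply the same principle at scale: the spec is versioned alongside the application code, so every build, test, and deployment runs against a known, reproducible environment.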
To improve the software prowess of an organization and its ability to deliver software code in a timely, efficient manner, IT leaders buy managed services including monitoring, automation, testing, and CI/CD. It really comes down to having the visibility and the gathered information to assess the situation and then figure out remediation. In addition, some enterprises are using managed services for CI/CD and deployment because they want to overcome an overall lack of visibility in their software production line.
The companies that buy DevOps managed services are companies that have limited visibility into their software development lifecycles and little ability to access the information; that is, the information is not available to them in a timely manner. They are also organizations on an accelerated curve to improve their speed-to-market while improving the quality of what they produce at the same time. They need real-time data to understand their situation, diagnose and generate a path to remediation and resolution, and continuously improve their process by resolving ongoing situations. Ultimately, resolving as much as can be resolved, and automating it in a repeatable process, is the goal of managed-services buyers.
Enterprises looking at DevOps managed services generally have a lack of visibility into the development and deployment process, and certainly lack the ability to improve it in a timely manner.
DevOps managed services give adopters a more established iterative process for improving the software development lifecycle to increase quality and velocity. They allow engineers to capture data faster and turn it into action. Engineers get data earlier in the process to keep quality high and the workflow moving, and to eliminate the bottlenecks, constraints, and drag that weigh down the development process.