The Role of Predictive Analytics in DevOps
Learn how data and predictive analytics can be used by DevOps engineers to develop and optimize the DevOps workflow.
The role of predictive analytics in implementing DevOps processes and tools is an important area to consider, given that many organizations are now focusing on DevOps to improve their value stream and customer satisfaction. Every year, the rate of change increases as new technologies, new competitors, and new business models test the organizations already in the market. Organizations must therefore be able to respond to change quickly while maintaining lower costs and better quality than their competitors.
DevOps
DevOps (a shortened form of the union of development and operations) can be defined as a software development and delivery process that focuses on communication, coordination, and collaboration among product management, product development, application software development, and operations specialists across the entire product/service lifecycle, from requirements through design and development to production support. There are, however, multiple definitions of DevOps depending on perspective. In the predictive analytics context, the definition I would like to focus on is this: DevOps is the set of outcomes (added value and improved flow) obtained by applying lean and agile principles to the IT value stream.
The amount and scale of data available for analysis is increasing day by day. Not only is there more data, but more types of data: in particular, the availability of large amounts of machine data has created the opportunity for machine data analysis. Utilizing machine data, specifically logs and metrics, helps to develop the DevOps workflow. Advanced analytics and predictive analytics platforms help the DevOps engineer identify trends and patterns in tera- and petabytes of data and detect what is likely to happen in the DevOps pipeline or the underlying infrastructure. The DevOps engineer can run a quick root cause analysis on a production issue or outage and ensure that its resolution keeps the pipeline flowing forward. Because of the sheer daily volume, much of this data turns to noise if it is not analyzed appropriately, masking the key trends and patterns the data is trying to indicate. With predictive analytics, the DevOps, infrastructure, or operations engineer can identify patterns in the data even when they cannot respond to every issue in real time. For example, if a memory drop or file type issue occurs, predictive analytics helps forecast when such events are likely to recur, so the DevOps engineer is better prepared to handle them, as in the sketch below.
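To make this concrete, here is a minimal sketch of the forecasting idea, assuming incident timestamps have already been extracted from aggregated logs (the timestamps themselves are invented). It simply estimates the average recurrence interval of past memory-pressure incidents; a real predictive analytics platform would use far richer models, but the principle of projecting forward from historical machine data is the same.

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

# Hypothetical timestamps of past memory-pressure incidents, as they
# might be extracted from aggregated application logs.
incidents = [
    datetime(2023, 1, 3, 2, 15),
    datetime(2023, 1, 10, 2, 40),
    datetime(2023, 1, 17, 1, 55),
    datetime(2023, 1, 24, 3, 5),
]

# Inter-arrival times between consecutive incidents, in hours.
gaps = [(b - a).total_seconds() / 3600 for a, b in zip(incidents, incidents[1:])]

avg_gap, spread = mean(gaps), stdev(gaps)

# Naive forecast: the next incident is likely to fall within one
# standard deviation of the average recurrence interval.
earliest = incidents[-1] + timedelta(hours=avg_gap - spread)
latest = incidents[-1] + timedelta(hours=avg_gap + spread)
print(f"Next incident likely between {earliest} and {latest}")
```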
Predictive Analytics
For organizations that are already ahead on the DevOps implementation curve, it becomes crucial to improve their outcomes further by focusing on interdisciplinary, cross-domain areas that can deliver improved value and keep them ahead of the competition. One such technique is applying predictive analytics to DevOps processes. Predictive analytics is a branch of advanced analytics used to forecast unknown future events. It helps organizations understand evolving and emerging needs and adapt to change well before their competitors can respond. Whereas traditional analytics tools provide a post-mortem view after an event has occurred, predictive analytics applies algorithms to predict future outcomes. Integrated into the software development life cycle, this capability leads to improved delivery and quality. Predictive analytics works over historical data sets, searching for causal relationships between individual data points in order to predict future trends, such as sudden changes in user behavior that could lead to a drop in transaction volume, or when a potential service outage could occur. A toy example of this predictive step follows.
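The sketch below fits a linear trend to a week of hypothetical daily transaction volumes and flags a forecast that falls below an assumed alerting floor. Both the data and the floor are invented for illustration; real systems would use far more sophisticated models than a straight line.

```python
# Toy forecast: fit a linear trend to a week of daily transaction
# volumes (invented numbers) and flag a breach of an assumed floor.
volumes = [10200, 10150, 9980, 9900, 9720, 9610, 9400]
days = list(range(len(volumes)))

# Ordinary least squares slope and intercept, computed by hand.
n = len(days)
mean_x, mean_y = sum(days) / n, sum(volumes) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, volumes)) / sum(
    (x - mean_x) ** 2 for x in days
)
intercept = mean_y - slope * mean_x

# Forecast four days past the last observation and compare to the floor.
future_day = days[-1] + 4
forecast = intercept + slope * future_day
ALERT_FLOOR = 9000  # assumed minimum healthy daily volume
if forecast < ALERT_FLOOR:
    print(f"Predicted volume {forecast:.0f} on day {future_day} "
          f"breaches the floor of {ALERT_FLOOR}")
```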
To understand the role of predictive analytics in DevOps, we need to consider the interdisciplinary areas of DevOps and artificial intelligence (AI), statistics, data analytics, programming, and machine learning. We also need to understand the context, whether deployment, release, or another area, before introducing predictive analytics into the workflow. Tool vendors, too, could incorporate predictive analytics into their products to build efficiency into DevOps delivery, reducing administrative costs and improving the speed at which innovative services reach the customer. Predictive analytics also builds a continuous improvement focus into DevOps services: we can adjust our pipeline variables (build, release, and other events) so that the forecast outcomes change, instead of responding to events in a reactive, post-mortem way. This drives a further cultural change around DevOps. And if we can focus on which changes would avoid future issues altogether, thereby altering the predicted outcomes, we have entered the realm of prescriptive analytics, one step beyond predictive analytics. With the increasing availability of cloud-based analytics computing power, the use of predictive analytics is growing, and its applicability to IT processes, including application development and operations, has increased with it. This could be a major disruptive force enabling organizations to deliver software quickly and with better quality (fewer defects).
According to a 2016 report by research firm IHS (The Cost of Server, Application and Network Downtime), downtime in information and communication technology (ICT) services costs North American organizations around US $700 billion every year. The bevy of physical servers, middleware, virtual machines, and other complex infrastructure resources behind many applications produces a host of metrics that need to be monitored, making it very difficult for a human to predict accurately when the next outage or service degradation will occur. Predictive analytics and adaptive algorithms, however, can flag likely service degradation well before an outage occurs, so it can be managed in time. Such algorithms measure server capacity against real-time demand and can make allocations and adjustments immediately, reducing the stress on application resources; this data and activity can be recorded and reviewed later. Infrastructure and operations teams can then see exactly how resources and services are being used and how they are likely to be used in the future. The sketch below illustrates the allocation idea.
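The following is a simplified sketch of that idea: it extrapolates near-term demand from recent request rates and scales replicas before saturation. The numbers, the per-replica capacity, and the 20% headroom rule are all assumptions for illustration, not a real autoscaler.

```python
import math

# Simplified capacity sketch: extrapolate demand from recent request
# rates and scale replicas pre-emptively. All numbers are illustrative.
recent_rps = [820, 870, 940, 1010, 1080]  # requests/sec, last 5 intervals
CAPACITY_PER_REPLICA = 300                # assumed requests/sec per replica
current_replicas = 4

# Naive trend: average growth per interval, projected one interval ahead.
growth = (recent_rps[-1] - recent_rps[0]) / (len(recent_rps) - 1)
predicted_rps = recent_rps[-1] + growth

# Provision 20% headroom above the predicted load before it arrives.
needed = math.ceil(predicted_rps * 1.2 / CAPACITY_PER_REPLICA)
if needed > current_replicas:
    print(f"Scale up: {current_replicas} -> {needed} replicas "
          f"(predicted {predicted_rps:.0f} req/s)")
```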
The predictive analytics service provides the best value for the business, the organization, and its users if it focuses on two key characteristics:
- Focus on the aggregation of data.
- Ensure the discovery of service issues as they emerge (using this aggregation of current and historical data).
The predictive analytics service should also take into account how users interact with the application across devices: mobile, cloud, online, and laptop/notebook. It should then be able to view the application in context and identify the root cause of issues before they become problems and lead to outages, by monitoring the application code and any dependencies, such as third-party software, that could cause a slowdown. This also helps to prevent latency and manage changing user requirements.
Dynamic Baseline Thresholds
Using dynamic baselines instead of manual thresholds enables a smarter approach to handling outages and issues. Dynamic baselining is a process in which an algorithm monitors the key performance indicators of the application and develops its own technique for setting thresholds. Machine learning makes this possible: the algorithm shifts its thresholds over time, based on what it has learned from the aggregated historical data. Alerts are therefore sent to the DevOps, infrastructure, and operations teams only when necessary, since the algorithm handles known issues itself, re-allocating resources to prevent outages and tracking user behavior trends by analyzing current and past data. Dynamic baselining ensures that thresholds reflect history, seasonality, and other factors rather than a single static value set by a human. The result is fewer alerts to the DevOps, infrastructure, and operations teams, but alerts that are more intelligent, carry more value, and are quickly actionable. This helps to reduce waste in the process (muda) and to manage the huge volume of incidents raised by non-humans (algorithms, apps, and systems). A minimal sketch of the idea follows.
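Here is a minimal sketch of dynamic baselining, assuming hourly response-time samples as input: instead of one static threshold, it learns a separate mean-plus-three-sigma threshold per hour of day, so a value that is normal at peak hours still raises an alert during quiet hours. The data is fabricated, and real systems model seasonality far more carefully.

```python
from collections import defaultdict
from statistics import mean, stdev

# Fabricated history: (hour-of-day, response time in ms) pairs, with
# higher baseline load during business hours.
history = [
    (h, base + noise)
    for h in range(24)
    for base in [120 if 9 <= h <= 17 else 60]  # busier during business hours
    for noise in (-8, -3, 0, 4, 9)             # five samples per hour
]

# Aggregate per hour and derive a baseline of mean + 3 sigma.
by_hour = defaultdict(list)
for hour, value in history:
    by_hour[hour].append(value)

thresholds = {
    hour: mean(vals) + 3 * stdev(vals)
    for hour, vals in by_hour.items()
}

def should_alert(hour: int, value: float) -> bool:
    """Alert only when a reading breaches its hour-specific baseline."""
    return value > thresholds[hour]

print(should_alert(14, 125))  # within the busy-hour baseline -> False
print(should_alert(3, 125))   # far above the quiet-hour baseline -> True
```

The same reading, 125 ms, is normal at 2 p.m. but anomalous at 3 a.m.; that asymmetry is exactly what a single static threshold cannot express.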
Conclusions
A predictive analytics system cannot actually see into the future. But DevOps, infrastructure, and operations teams can use it to surface potential future issues that humans cannot spot in current and historical data because of its sheer volume. The ability to view the potential future with predictive analytics, and to adjust the variables that change those outcomes with prescriptive analytics, helps these teams keep the pipeline flowing and focus on improved value outcomes. This is exactly where predictive analytics will be used to help DevOps teams in organizations that are mature in their DevOps implementation: managing pipeline flow and maximizing value for the organization, which is one of the main reasons DevOps came into existence in the first place.