5 Trends Shaping the Future of ITOM
We look at the trends that promise to bring greater agility to the IT industry and improve teams' ability to respond to changing project requirements.
The ITOM team often finds itself locked in a tug-of-war. The lofty objective of achieving speed, adaptability, and agility often stands at odds with the need to ensure stability and high uptime. The desire to help the business achieve accelerated growth in today's fast-changing environment frequently comes into conflict with the compulsion to tighten purse strings and optimize costs.
For IT operations, the solution doesn’t lie in forsaking one goal to embrace another. It is about doing more with less, whilst delivering greater benefits in line with the wider business goals of the organization. Traditional paradigms no longer work.
A closer look at the upcoming trends is thus necessary to understand the complexities and direction of this critical function.
Hybrid Cloud Management
Cloud computing has left an indelible impression on IT practices. Gartner predicts that by 2020, only 20% of enterprise workloads will remain on-premises.
That IT will implement and manage hybrid clouds with a heavy and possibly dominant dose of public IaaS and SaaS is quite clear at this point. What is also irrefutable is that managing a mix of private cloud, virtualization, and public cloud platforms will pose new challenges for IT, from a monetary as well as a management perspective.
Workload migrations between on-premises and off-premises facilities, spanning a hugely complex web of diverse providers, will be difficult and expensive. They will also raise tricky problems in end-to-end performance monitoring, root cause analysis, and overall systems maintenance.
When the entire IT deployment is in your own data center, it is comparatively simpler to figure out precisely where performance issues exist. But, with multiple entities involved in a hybrid IT environment, the plot just gets way more convoluted. Does the issue exist in the user's access device, your data center, your network, the service provider's data center, their network, or the global network that connects everything together?
After all, if you can’t see it, how can you optimize it?
In this scheme of things, IT ops need the mythical ‘single pane of glass’ that offers visibility into the dedicated IT infrastructure, and as much of the public platform as the service provider will allow.
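The fault-localization problem described above can be sketched in a few lines. This is an illustrative toy, not any vendor's monitoring API: the layer names and health probes are invented, and real tools would run concurrent checks with timeouts and richer telemetry.

```python
# Hypothetical sketch: walk the request path of a hybrid deployment and
# report the first layer whose health check fails. Layer names and the
# stubbed probes are assumptions for the example.

def locate_fault(layers):
    """Return the name of the first failing layer, or None if all pass."""
    for name, check in layers:
        if not check():
            return name
    return None

# Each entry pairs a layer with a health probe (stubbed here with lambdas).
layers = [
    ("user device", lambda: True),
    ("corporate network", lambda: True),
    ("provider data center", lambda: False),  # simulated outage
    ("provider network", lambda: True),
]

print(locate_fault(layers))  # -> provider data center
```

A real "single pane of glass" aggregates exactly this kind of layered check across owned and rented infrastructure, limited by whatever visibility the public provider exposes.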
ITOM practices for hybrid cloud may still be relatively immature at the moment, but major vendors such as CA Technologies, IBM, and even ServiceNow have seen the writing on the wall, and are continuously expanding their standard systems management platforms to support this sort of approach.
The Container Boom
The application container segment is expected to grow from $762 million in 2016 to $2.7 billion by 2020.
Ever since containerization exploded onto the scene, it has been creating ripples in IT circles. The technology promises to significantly transform the way IT operations are carried out. Containers are highly scalable and transient entities that allow new forms of agile application development. They provide for compartmentalization into microservices that can be run on physical systems or VMs.
As more and more application developers embrace containers as a primary development tool, it will become vital for IT to provide the infrastructure and operations support needed to deploy and manage container-based applications at scale. While application containers promise rapid scaling, flexibility, and ease of use, their arrival in the data center may prove disruptive.
A typical container lasts only a few milliseconds, compared to a physical server that may last for years, or a virtual machine that may run for weeks or months. The highly scalable and transient nature of containers calls for much finer-grained workload management, built on automation and orchestration. Data center admins will also need to account for how containers affect server capacity.
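One basic capacity-planning question raised by short-lived workloads is how many containers actually run at once. The sweep-line sketch below, an illustration rather than any orchestrator's API, computes peak concurrency from start/stop timestamps:

```python
# Illustrative sketch: estimate peak concurrent containers from their
# start/stop times, a basic input to capacity planning when workloads
# live for seconds rather than weeks. Timestamps here are invented.

def peak_concurrency(intervals):
    """intervals: list of (start, stop) pairs; returns the maximum overlap."""
    events = []
    for start, stop in intervals:
        events.append((start, 1))    # container starts
        events.append((stop, -1))    # container stops
    events.sort()
    peak = current = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

# Short-lived container runs (seconds since an arbitrary epoch).
runs = [(0, 3), (1, 2), (1, 5), (4, 6)]
print(peak_concurrency(runs))  # -> 3
```

A server sized only for average load would miss that three of these four runs overlap; transient workloads make peak, not average, the binding constraint.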
Hewlett Packard Enterprise (HPE) recently announced a major update of its IT Operations Management (ITOM) software applications, with new container-based versions. The new microservice and containerized architecture will allow simpler deployment and faster versioning, as well as improved scalability.
API-Friendliness: The New Norm
Deeply woven into the fabric of the new cloudification- and containerization-driven reality is an API-friendly infrastructure that automates and connects a wide and ever-growing range of software components.
Most of today's dominant legacy ITOM tool interfaces are predominantly GUI-based rather than API-oriented. APIs do exist in the ITOM space, but they live primarily in the realm of vendor and custom software integration rather than direct end-user interaction. In this form, data is not widely shareable, so most of the rich data that IT infrastructure produces goes unused.
But with the growing number and variety of devices connected to enterprise networks, it no longer makes sense to keep relying on legacy tools that build siloed practices around rigid GUIs. Without an open interface and common standards, data center operators end up spending a lot of time building system connections.
For instance, if an enterprise decides to change the amount of available storage, the data center admin has to do the same task twice over, once for systems running Microsoft Windows, and once again for those running Linux. With a standard interface, the IT team can make the change without the need to stress over whether the server runs Windows or Linux OS.
With APIs, the connections are automated; in effect, APIs help to transform chores that humans once carried out manually into tasks that machines complete automatically.
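The storage example above can be made concrete. In the sketch below, everything is hypothetical: the endpoint path, the payload shape, and the management API itself are invented to show the shape of an OS-agnostic call, not to document any real product.

```python
# Hypothetical sketch: one OS-agnostic API request replaces separate manual
# procedures for Windows and Linux servers. The endpoint and payload are
# invented for illustration.
import json

def build_resize_request(server_id, new_size_gb):
    """Build a storage-resize request that is identical for any server OS."""
    return {
        "method": "PATCH",
        "url": f"/api/v1/servers/{server_id}/storage",  # hypothetical endpoint
        "body": json.dumps({"size_gb": new_size_gb}),
    }

req = build_resize_request("web-01", 500)
print(req["url"])  # -> /api/v1/servers/web-01/storage
```

The same request works whether "web-01" runs Windows or Linux; the OS-specific translation happens once, behind the API, instead of twice in an admin's hands.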
Provisioning and configuration management have already begun a significant shift toward the API automation model, and that shift will only ramp up in the future. Chef, Puppet, and SaltStack are prime examples.
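At their core, these tools follow a declarative model: compare the desired state against the observed state and apply only the differences. The toy reconciler below sketches that idea under invented setting names; real tools add resource types, dependency ordering, and idempotency guarantees far beyond this.

```python
# Minimal sketch of the declarative reconciliation model behind tools like
# Chef, Puppet, and SaltStack. The setting names are invented examples.

def reconcile(desired, observed):
    """Return only the settings that must change to reach the desired state."""
    return {k: v for k, v in desired.items() if observed.get(k) != v}

desired = {"ntp": "enabled", "firewall": "on", "pkg:nginx": "1.24"}
observed = {"ntp": "enabled", "firewall": "off"}
print(reconcile(desired, observed))  # -> {'firewall': 'on', 'pkg:nginx': '1.24'}
```

Running the reconciler twice is a no-op the second time, which is the idempotency property that makes API-driven configuration safe to automate.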
Algorithmic IT Operations
In the modern digital business, data has to become the currency that drives ITOM. It is thus hardly surprising that algorithmic IT operations (AIOps) is emerging as a prominent trend. Gartner estimates that by 2019, 25% of global enterprises will have strategically implemented an AIOps platform supporting two or more major IT operations functions.
Earlier, the use of big data and analytics in ITOM was limited merely to data-centric monitoring and analysis, referred to as "IT operations analytics" (ITOA). However, analytics is now steadily evolving toward driving all decisions and actions more precisely. From recommending the best possible actions to triggering those actions in an automated fashion, and even predicting customer preferences to drive a more engaging customer experience, AIOps promises to reshape ITOM completely.
For instance, application performance metrics, if processed in the right way, can not only identify when a server is down but also drive automated decision-making and trigger remedial action.
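A minimal version of that loop can be sketched with a z-score check: flag a metric that deviates sharply from its recent history and trigger a response. The threshold, the metric values, and the remediation hook are all assumptions for the example; production AIOps platforms use far richer models.

```python
# Illustrative AIOps sketch: flag an anomalous metric with a simple z-score
# and gate an automated response on it. Values and the 3-sigma threshold
# are assumptions for the example.
import statistics

def is_anomalous(history, latest, z_threshold=3.0):
    """True if `latest` deviates from history by more than z_threshold sigmas."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

response_times_ms = [101, 99, 102, 100, 98, 101, 100]
latest = 430  # sudden spike

if is_anomalous(response_times_ms, latest):
    print("anomaly detected: triggering automated remediation")
```

The interesting shift is the last two lines: the detection result feeds an action directly, rather than a dashboard waiting for a human.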
Growing Adoption of IoT
Gartner predicts that 20.8 billion connected things will be in use worldwide by 2020.
The growing integration of IoT products into business operations will mean more and more competing platforms and ecosystems, differing protocol standards, and startling network complexity, all of which present new challenges from an ITOM perspective.
To meet the colossal demand for greater data volumes and more IP addresses, IT operations teams will need to handle capacity-related issues such as network and IP address management. Also vital will be the need to ensure more effective collaboration with business teams to figure out how these newly connected devices will tie in to business operations and business models.
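The IP address management point can be made concrete with the Python standard library. Checking whether a subnet has room for a projected device count is a routine IPAM task in an IoT rollout; the CIDR block and device counts below are invented for the example.

```python
# Sketch of a routine IPAM check for an IoT rollout: does a subnet have
# enough usable host addresses for a projected device count? The subnet
# and counts are example values.
import ipaddress

def subnet_fits(cidr, device_count):
    """True if the IPv4 subnet has enough usable host addresses."""
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2  # exclude network and broadcast addresses
    return usable >= device_count

print(subnet_fits("10.0.0.0/24", 200))  # -> True  (254 usable hosts)
print(subnet_fits("10.0.0.0/24", 300))  # -> False
```

At IoT scale, the same arithmetic quickly pushes teams toward larger allocations or IPv6, where address exhaustion within a segment stops being the constraint.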
Considering the potential for several billions of connected devices in just the next few years, the infrastructure needed to support IoT must be evaluated carefully.
The IT operations space has never been more dynamic and exciting. To achieve the goal of seamless deployment, players will increasingly focus on integrated ITOM, which means integration across the myriad phases and interrelated processes of IT operations and service management. As these trends move beyond their emerging state, we will witness an interesting interplay among the many disciplines that constitute IT Operations and Service Management (ITOSM) over the next few years.
Published at DZone with permission of Krittika Banerjee. See the original article here.