
Dependency Management and Organic IT Integrations



If the future of IT is about integrated infrastructure, where will this integration take place? Most people will naturally tend to integrate systems and tools that occupy adjacent spaces in common workflows. That is to say, wherever two systems must interact (typically through some manual intervention), integration will take place. If left unattended, integration will grow organically out of the infrastructure.

But is organic growth ideally suited for creating a sustainable infrastructure?

A with B with C

In the most basic sense, integration will tend to occur at system boundaries. If A and B share a boundary in some workflow, then integrating A with B makes perfect sense. And if B and C share a boundary in a different (or even further down the same) workflow, then it makes equal sense to integrate B with C.

In less abstract terms, if you use a monitoring application to detect warning conditions on the network, then integrating the monitoring application and the network makes good sense. If that system then flags issues that trigger some troubleshooting process, it might make sense to integrate the monitoring tool with your help desk ticketing system so that trouble tickets are opened automatically as issues arise.

In doing so, you end up with A being integrated with B, which is in turn integrated with C. For workflows that span all three infrastructure elements, the integration simplifies the operator’s role and reduces the mean time to insight for new issues.
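This point-to-point pattern can be sketched in a few lines. The sketch below is illustrative only, and all of the class and method names are hypothetical: the monitor (A) holds a direct reference to the network (B) and to the ticketing system (C), so each integration is hard-wired to its neighbor.

```python
# Hypothetical sketch of direct, point-to-point integration:
# Monitor (A) -> Network (B) -> TicketSystem (C), each coupled to the next.

class Network:
    def poll(self):
        # A real system would query devices; here we fake a warning event.
        return [{"device": "sw-01", "status": "WARN", "detail": "high CPU"}]

class TicketSystem:
    def __init__(self):
        self.tickets = []

    def open_ticket(self, summary):
        self.tickets.append(summary)

class Monitor:
    def __init__(self, network, ticket_system):
        self.network = network              # direct dependency on B
        self.ticket_system = ticket_system  # direct dependency on C

    def run_once(self):
        for event in self.network.poll():
            if event["status"] == "WARN":
                self.ticket_system.open_ticket(
                    f"{event['device']}: {event['detail']}")

monitor = Monitor(Network(), TicketSystem())
monitor.run_once()
```

Note that swapping out either the network layer or the ticketing system here would mean editing the monitor itself, which is exactly the coupling the next sections explore.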

Organic growth of automation

Over time, your integrations will expand. On day one of your integration journey, perhaps you only have A and B. In the fullness of time though, you expand and have A with B with C with D with E. As you look to increase levels of automation, you will look to integrate between all the various systems that make up a workflow.

In the early days, the workflows will be fairly small and well-contained. They make the easiest initial targets for automation because the scope is well understood and the complexity of the interactions is self-contained by the systems that are involved in a particular workflow. But as you get more adept at managing automation, you will naturally progress to more complex workflows that span more systems and require more integration.

Unintended consequences: daisy chaining

As your workflow expands to include more integrated elements, you run the risk of daisy chaining integrations. Where your infrastructure previously existed in silos that were manually integrated through human intervention, you now have a set of components that are all unified under a single (or small number of) workflow. This is a good thing in that what used to take forever to complete can now be done in moments and with very little interaction.

However, in creating integrated workflows that span multiple elements in the infrastructure, you have essentially daisy chained your systems. If you want to replace a system in the middle, you now have to consider how the introduction of a new or different platform will impact the rest of the integrated workflow. If your workflows are large, it could mean that the integration dependency chain is prohibitively complex to manage.

At best, this makes planning for the end of life of individual components challenging. At worst, it can mean dependencies so complex that you are effectively locked into a single vendor or very narrow set of components across the whole of your infrastructure.

Hub-and-spoke integrations

To achieve a highly integrated infrastructure with workflows that span multiple devices, you have to find a way to integrate systems without unnecessarily creating the daisy chains that make replacement or evolution difficult. One way to do this is to handle integration slightly differently.

Rather than integrating everything with everything else directly, introducing an additional layer creates a buffer between the systems and the integrations themselves. One way to imagine this is as a hub-and-spoke integration model (as opposed to the daisy chain model). In a hub-and-spoke architecture, individual systems are integrated with a central service engine, through which they are integrated together.

This model does two things architecturally: it provides a place where data (the currency of integration) can be normalized, and it creates a level of abstraction so that individual systems can be replaced without disrupting the balance of the integrations for a given workflow. If you decide to replace B, for instance, you replace B and integrate B’ with the central service engine.
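A minimal sketch of that idea, with hypothetical names throughout: each system registers once with a central service engine, which routes events between them. Replacing B with B′ then touches only B′ and its engine integration; the systems that emit events never change.

```python
# Hypothetical hub-and-spoke sketch: systems integrate only with the
# central service engine, never directly with each other.

class ServiceEngine:
    def __init__(self):
        self.handlers = {}  # event type -> list of handler callables

    def register(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def emit(self, event_type, payload):
        # This is also where payloads could be normalized to a common shape.
        for handler in self.handlers.get(event_type, []):
            handler(payload)

class TicketSystem:
    """B: the original ticketing system."""
    def __init__(self, engine):
        self.tickets = []
        engine.register("warning", lambda p: self.tickets.append(p))

class NewTicketSystem:
    """B': a drop-in replacement; only its engine integration differs."""
    def __init__(self, engine):
        self.issues = []
        engine.register("warning", lambda p: self.issues.append(p.upper()))

engine = ServiceEngine()
b_prime = NewTicketSystem(engine)      # swapping B for B' touches only B'
engine.emit("warning", "sw-01: high cpu")  # the emitter is unchanged
```

The design point is the single registration surface: producers emit events to the engine without knowing who consumes them, which is what makes the B-to-B′ swap local rather than chain-wide.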

Addressing concerns

Of course, this also creates a central point of integration, which makes integration faster but also introduces a single point of failure and a potential choke point. The key to addressing the first concern is to keep the integration point lightweight and simple. It need only be a message bus that facilitates communication between the other components.

On the second concern, the objective cannot be to integrate everything with everything. Rather, the integration architecture needs to support a pub/sub model that allows individual elements to communicate with only those elements to which they must be integrated. But how do you determine what communicates with what? The same workflow analysis that would drive organic growth is sufficient here.
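That selective pub/sub model can be sketched as follows (all topic names and components are hypothetical): each element subscribes only to the topics its workflows require, so the hub never degenerates into an everything-with-everything mesh.

```python
# Hypothetical topic-based pub/sub sketch: subscribers receive only the
# topics they explicitly asked for.

from collections import defaultdict

class Hub:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

hub = Hub()
received_by_ticketing = []
received_by_backup = []

# Workflow analysis decides the wiring: ticketing cares about warnings,
# the backup system cares about config changes -- nothing more.
hub.subscribe("network.warning", received_by_ticketing.append)
hub.subscribe("config.change", received_by_backup.append)

hub.publish("network.warning", "sw-01: high CPU")
# Only the ticketing subscriber sees the warning; backup is untouched.
```

Topic filtering is what keeps the hub lightweight: the bus routes messages but carries no workflow logic of its own.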

The bottom line

Architecture is about planning. We go through explicit architecture phases when we design and build infrastructure. Why would we expect integrations to be any different? The answer is that we shouldn't. In the same way that you plan out how your compute, networking, and storage systems will evolve, so too should you explicitly plan out your integration architecture. The consequences of failing to plan can be painful and costly, and they are frequently not discovered until you are already under duress for other reasons. And solving architecture problems under duress is never a good thing.

[Today’s fun fact: The chicken is the closest living relative of the Tyrannosaurus Rex. Remember that the next time someone calls you chicken for being afraid of something, and grasp them with your tiny arms before biting them to bits.]



Published at DZone with permission of Mike Bushong, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
