Integration Is Still A Distributed-Systems Problem

Promises, promises. Today, it’s microservices. Yesterday it was SOA, ESBs, and Agile. Even before that we had integration brokers, client-server RPC (CORBA/DCOM/EJBs), EAI hubs, etc. But no silver bullet will eliminate all integration headaches.


This article is featured in the DZone Guide to Enterprise Integration, 2015 Edition. Get your free copy for more insightful articles, industry statistics, and more.

Integration is not a sexy job, but someone has to do it. Whether you’re a small startup or an ageless, large corporation, chances are high that you’ll need to integrate systems as part of an overall solution. On top of that, the integration solutions we have to provide these days are becoming more and more complex. We have to think about Big Data, IoT, mobile, SaaS, cloud, containers; the list goes on endlessly. IT is being asked to deliver solutions smarter, faster, and more efficiently. All the while, the business operates in a competitive environment that changes every day as technology changes, and we’re being strangled by domain complexity. Rest assured, however: we’ve been promised solutions.

Today, it’s microservices. Yesterday it was SOA, ESBs, and Agile. Even before that we had integration brokers, client-server RPC (CORBA/DCOM/EJBs), EAI hubs, etc. Even though we’ve known that none of these are silver bullets, we latched onto these movements and rode them to glory. Or so we thought.

I spend a lot of time talking with organizations about architecture, technology, open source, and how integration can help extract more value from their existing enterprise systems, as well as position them for future flexibility and adaptivity. Invariably, the discussion makes its way to microservices. This isn’t surprising, since every person in the world with a computer seems to either blog or speak about them. “Will microservices help me?” I’ve been asked this question quite a few times over the past few months. There seems to be a general assumption that if we just do X because “they” are successful and are doing X, we’ll be successful. So let’s change things from SOAP to REST, application servers to Dropwizard, VMs to containers, and we’ll be doing microservices. And microservices will help us move faster, scale to unicorn levels, be lightweight and agile, and be the cure-all we know doesn’t exist but still hope that this time we’ve found it.

The answer to the question is “you’re asking the wrong question.” Adrian Cockcroft recently said: “people copy the process they see, but that is an artifact of a system that is evolving fast.” You’re not going to be a connected, adaptive, agile company delivering IT solutions the way Company X does just because you copy what Company X “appears” to do today. There are years of evolution, culture, and failures that led to what Company X does today and will do tomorrow. Unfortunately, you cannot skip those parts by sprinkling some technology makeup onto your organization and IT systems.

So what can we do? We want to be an adaptive, agile organization and we need to integrate existing technology to align with it, but that’s a bit harder than just choosing a “no-app-server” strategy.

Believe it or not, whether you call it microservices, SOA, client/server, etc., we are still tasked with getting systems to work together that may not have been designed to work together. To do this successfully, we have to focus deeply on the fundamental principles that will help us build agile, distributed systems and not get too caught up in hype. Domain modeling, boundaries, runtime dependencies, loose coupling, and culture/organizational structure are all prerequisites to building an adaptive organization, even in the face of constantly changing business and technological landscapes. Integration is a distributed-systems problem, so let’s focus on some of the building blocks of successful distributed systems.

Domain Modeling

As developers, we are intrigued (or downright giddy) with technology. With new technology unfolding every day, it’s not hard to understand why. However, for most systems, the technology is not the complicated part: the domain is. Most developers I’ve spoken with aren’t interested in becoming domain experts to facilitate building better software. It’s not exactly a highly sought-after skill, either (search most job boards… do you see domain knowledge or domain modeling high up on the list of requisites?). But domain modeling is a powerful and underused technique that is a vital prerequisite to building integrations and applications. We use models in our daily lives to simplify otherwise complex environments. For example, we use the GPS on our phones for navigating a city, but the map on our phones is a model geared toward accomplishing that goal. Using GPS and maps on our phones may not be a helpful model for navigating a battlefield. Understanding the nuances of the domain and where to elucidate ambiguities is key to iterating on your understanding of what the software should do and how to model it in your code. Each discussion with a domain expert should be reflected in the code so that they evolve together. Once you’re able to uncover the right model for the purpose of the software, you’re ready to explore the right boundaries.


Boundaries

We deal with a lot of systems when we set out to build integrations, often with systems that were never designed to talk to each other. Those boundaries are fairly straightforward. But there are nuances in a domain that can cause ripple effects if they are not accounted for and designed into explicit boundaries. For example, when I go to place an order on Amazon.com or Walmart.com, I can add things to my shopping cart and check out. I will be prompted for payment and delivery information and then submit my order. So we could capture this as an “Order” in the domain and carry on. However, there’s an obvious difference between how I place orders with these websites and, say, how a large corporation would place orders. Company A could place an order for 10,000 widgets from one of these online retailers, but it probably won’t do so through the website the way I do; it will most likely submit a purchase order. The process of placing an order is quite different for them. You can even throw in customer status (Gold, Silver, Bronze, etc.) as classifications that may affect how an order is placed and received in the system. If these concepts are not modeled directly, you can end up with a “canonical model,” which becomes the source of constant conflict when changes are needed (or when understanding of the model becomes clearer). Once you’ve established seams or boundaries around your models, you must think about how to expose them to collaborating agents. In other words, you must think about your integration relationships and how those are expressed.
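As a minimal sketch of the point above (all class and field names here are illustrative, not from any real retailer’s system): a consumer checkout and a corporate purchase order both answer to the word “Order,” but modeling them as two types in two contexts keeps their very different rules from colliding in one canonical model.

```java
// Retail context: a consumer checks out a cart and pays by card.
class RetailOrder {
    final String customerId;
    final int itemCount;

    RetailOrder(String customerId, int itemCount) {
        this.customerId = customerId;
        this.itemCount = itemCount;
    }

    boolean requiresPaymentCard() {
        return true; // consumers pay at checkout
    }
}

// Procurement context: a company submits a purchase order against credit terms.
class PurchaseOrder {
    final String purchaseOrderNumber;
    final int quantity;

    PurchaseOrder(String purchaseOrderNumber, int quantity) {
        this.purchaseOrderNumber = purchaseOrderNumber;
        this.quantity = quantity;
    }

    boolean requiresPaymentCard() {
        return false; // invoiced later under negotiated terms
    }
}
```

Forcing both into a single Order class would mean every rule carries conditionals for the other context; separate models let each evolve on its own.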

APIs and Modularity

Modularity isn’t a new concept. Still, it seems difficult enough to get right that people bend (or blend) architectures and seams so they don’t have to deal with modularity outright. Oh, you have an order system over there? Go ahead and share these implementations of Order objects. No. Stop sharing domain code thinking it will save you time (or whatever your justification is), and focus instead on designing modules that hide their implementation details and expose only certain concepts over APIs or contracts. We want modules to be independent insofar as they can change their implementation details without affecting other modules. That’s the goal. But it seems we get too preoccupied with defining the right API up front and focusing on WSDL-like contracts. Just like domain models and boundaries, APIs also evolve (as they must); you won’t ever arrive at the right API “up front.”
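To make the “hide implementation, expose a contract” idea concrete, here is a hypothetical sketch (the interface and class names are invented for illustration): collaborators depend only on the interface, so the backing implementation can change without rippling outward.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The contract other modules see: concepts, not implementation details.
interface OrderService {
    String placeOrder(String customerId, int quantity);
    Optional<String> statusOf(String orderId);
}

// One possible implementation, hidden behind the interface. It could be
// swapped for a database- or queue-backed version without touching callers.
class InMemoryOrderService implements OrderService {
    private final Map<String, String> statuses = new HashMap<>();
    private int nextId = 1;

    public String placeOrder(String customerId, int quantity) {
        String id = "ord-" + nextId++;
        statuses.put(id, "ACCEPTED");
        return id;
    }

    public Optional<String> statusOf(String orderId) {
        return Optional.ofNullable(statuses.get(orderId));
    }
}
```

Sharing the interface is cheap; sharing InMemoryOrderService (or the Order domain classes behind it) is the coupling the paragraph warns against.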

Runtime Coupling

When we think of coupling, we tend to think of technological coupling. Let’s not use Java RMI, because that’s a specific platform; let’s use XML or JSON instead. Or let’s not inherit from dependency injection containers, because that ties us to the dependency injection framework (just in case we want to change that in the future). These are all noble goals, but in my experience, we get overly focused on design-time or technology-specific coupling and forget about much bigger coupling phenomena. For example, the fallacies of distributed computing still hold here. The network is not a reliable transport. Our service collaborators may NOT receive our requests. They may not even be available. When you start to think “entity services,” “activity services,” “orchestration services,” and have large chains of synchronous calls between services—turtles all the way down—you can start to see how this may break down. By designing proper models, boundaries, and APIs, we are aiming toward autonomously deployed and managed systems. But if you have large chains of synchronous calls like this, you’ve most likely got some bad runtime coupling: making changes to a service necessitates changes to the services you collaborate with (in a ripple effect), and if services are not available you run the risk of outages, etc. Asynchronous, publish-subscribe style architectures can alleviate some of this coupling.
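To illustrate the publish-subscribe alternative, here is a deliberately tiny in-process sketch (the EventBus class is invented for illustration; in practice this role is played by a broker such as a JMS provider or Kafka): the publisher emits an event without knowing who, if anyone, is listening, so an absent consumer cannot fail the publisher the way a dead synchronous callee would.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A toy topic-based event bus. Publishers and subscribers only share
// topic names and event payloads, never direct references to each other.
class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, String event) {
        // Publishing to a topic with no subscribers is not an error:
        // the publisher does not depend on its consumers being present.
        for (Consumer<String> handler : subscribers.getOrDefault(topic, List.of())) {
            handler.accept(event);
        }
    }
}
```

Contrast this with a synchronous call chain, where the caller blocks on every downstream service and inherits all of their failures.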

Conway’s Law

In 1968, Melvin Conway wrote, “organizations which design systems... are constrained to produce designs which are copies of the communication structures of these organizations,” and he couldn’t be more correct. Large organizations have been designed from the top down for one thing: efficiency. Following reductionist “scientific” management theories since the early 1900s, our companies have focused on reducing variability and optimizing each part of the organization. Which is why, not surprisingly, IT organizations have “DBAs,” “UI experts,” and “QA teams.” Each team focuses on its area of specialization with scrutiny and optimization. Then, following Conway’s Law, we have three-tier applications—or “layers”—that correspond with each of those teams: the UI layer, the business logic layer, the database layer, and so on. Then we throw things over the fence to Ops and QA, etc. Although this may be the hardest thing to evolve, the organizational structure and the culture of its employees have the most profound implications for how we build our distributed systems. If we want to be an adaptive, connected company, we need to explore what that means for our organizational management philosophies and structures. Then, as a corollary, we have a much better chance of building truly autonomous, decoupled systems that can scale individually, adapt to failures and adverse conditions, and change to meet market challenges.

Without these fundamentals at the forefront, we run the risk of rabidly adopting the latest and greatest fads and choking our businesses in the area they need to be most adaptive: IT and technology.


