
Don’t Reinvent the Wheel With Integration Design: Top 3 Best Practices Exposed


In this post, we take a look at three best practices for middleware integration. After all, there's no point in engineering everything from scratch.


The API middleware integration industry is not short of architectural principles: top-down strategic initiatives like digital transformation and legacy modernization, as well as developer-centric ones like event-driven architecture, microservices, and DevOps. But beyond that, in the fun and games of actually delivering services to enterprise customers, there are plenty of intricacies for architects and project teams to solve. These challenges aren’t customer-specific; they are everyday nuances in the middleware integration space.

What makes the middleware integration professional services space interesting is that, when faced with a design challenge, we also have to weigh pragmatic factors like the customer’s organizational capabilities and processes: how mature they are and which groups are involved, not only in the software development lifecycle (SDLC) but also during ideation and maintenance. To build a future-proof implementation, we also have to observe the historical evolution of the customer’s business products. These are the design cases where best practices can be challenged, and where practicality and even experience-based intuition play a role in decision making.

Case 1: How Micro Can a Microservice Be?

For one of our customers, we built separate ESB applications for frontend/producer interfaces, consumer interfaces, and processes as service orchestrators, in order to maximize reuse and minimize deployments for new business products and even product changes. Focusing only on the process layer, we had a design decision to make. Option 1 is a modularly built single application per business product, with the product variances living inside the application as functions. Option 2 breaks those functions and variances out of the product, making each one an independent application; it is more granular and smaller than Option 1.

Option 1: Single mother process per service


Option 2: More granular service

From a purely architectural best-practice standpoint, the likely choice is Option 2, to maximize reuse. What we learned in our microservices journey is that it doesn’t only benefit fast-paced organizations; it also benefits slower ones, because deploying an independent application is less risky and requires fewer handoffs between Dev and Ops. But here’s the thing: what about the practical angle of development maintainability and cost savings? More applications consume more server resources, which wouldn’t help the customer’s license cost if the total application footprint takes up more server cores.

More application runtimes mean more overhead, such as loading the same libraries and dependencies and doing more serialization and marshaling. We’re faced with the debate of reusability versus performance. Further, from the customer’s maintainability point of view, segregating to this level of granularity could be too much for them to swallow. Isn’t a modularly built application enough? Is breaking down further into more runtime apps needed? Bear in mind that there are many other services in the actual project for them to understand and maintain: service frameworks around security, logging, notification, metadata audit, and so on. Yes, Option 2 would ‘eventually’ be easier for the customer’s development team to maintain in the long run once they get the hang of it, but that is not the case in the early transition stages. The customer’s learning curve and operational readiness are crucial factors to consider. Clearly, the decision here is a closer call than initially thought.
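To make the trade-off concrete, here is a minimal, hypothetical Java sketch of Option 1’s shape; the class and variant names are illustrative, not taken from the actual project. Under Option 2, each variant handler below would instead become its own independently deployed application.

```java
import java.util.Map;
import java.util.function.Function;

// Option 1: one "mother" process application per business product.
// Product variances live inside the app as pluggable functions, so a
// new variant is a code change plus a redeploy of this single app.
public class OrderProcessApp {

    // Each variant is just a routing entry, not a separate runtime.
    private final Map<String, Function<OrderRequest, OrderResult>> variants = Map.of(
        "retail",    this::handleRetail,
        "wholesale", this::handleWholesale
    );

    public OrderResult process(OrderRequest request) {
        Function<OrderRequest, OrderResult> handler = variants.get(request.variant());
        if (handler == null) {
            throw new IllegalArgumentException("Unknown variant: " + request.variant());
        }
        return handler.apply(request);
    }

    private OrderResult handleRetail(OrderRequest r)    { /* orchestration logic */ return new OrderResult("retail-ok"); }
    private OrderResult handleWholesale(OrderRequest r) { /* orchestration logic */ return new OrderResult("wholesale-ok"); }

    // Under Option 2, handleRetail and handleWholesale would each become
    // their own application with its own runtime, deployment, and overhead
    // (duplicated libraries, extra serialization between services).
    record OrderRequest(String variant, String payload) {}
    record OrderResult(String status) {}
}
```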

Case 2: Designing the Middleware’s Agnostic Schema

A middleware schema is an internal schema; the nearest analogue is SOA’s canonical data model/schema. This is not an endpoint payload’s schema; it is the inter-API communication schema inside the ESB and API middleware. We were keen to have this in the overall architecture of our implementation rather than go the pass-through route, because part of the customer’s requirement was a future-proofed integration solution, and the use case is many-to-many.

The objective matters because the customer wants not only to add inbound partners but also to freely add or replace backend systems in the future without throwing away what’s built in the integration platform. In other words, there is more than one inbound format and more than one outbound format per business transaction message. The challenges were that we needed to design this schema to be as agnostic as possible from a product perspective, not a system perspective, and that at that stage of the project there was only one inbound partner and only one backend system, namely an ERP. That didn’t give us a variety of samples to play with and compare in order to compose one.

Our sensible options were:

  1. Adopt a standard design pattern like SOA’s canonical modeling.

  2. Look into the top two to five systems in this industry and study their data.

We realized very early that pursuing either option would easily trap us in a black hole of working hours. We’re not fans of SOA’s heavy upfront schema, which would entail a standardization exercise, and we didn’t want the orchestration and parameter lookup design to be bound to this schema as its contract.

We didn’t choose these so-called sensible options and instead went the easy but safe route of failing fast by doing what’s below, with the plan of organically growing and changing the schema as the need arises. Historically, product specification changes aren’t drastic in this organization. We approached the problem without overthinking it and used the minimum-viable-product way:

  • Immersed ourselves as much as we could in the customer’s business process, focusing on the data to consider, so we could convert them into fields.
  • Made each business transaction message an independent schema, in contrast to SOA’s canonical principle.
  • Combined the relevant inbound payload fields from the partner with the outbound backend system’s fields.
  • Added two dynamic fields as identifiers (see the highlighted fields).

This is the schema we came up with.
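The schema itself isn’t reproduced here, but its shape can be sketched in Java with Jackson. Assume the field names below are hypothetical stand-ins for the real ones: two identifier fields plus a catch-all map for the optional, dynamic keys.

```java
import com.fasterxml.jackson.annotation.JsonAnyGetter;
import com.fasterxml.jackson.annotation.JsonAnySetter;
import java.util.LinkedHashMap;
import java.util.Map;

// A hypothetical internal (agnostic) schema for one business
// transaction message: core fields merged from the inbound partner
// and outbound backend payloads, plus a dynamic-field map so the
// schema can grow organically without breaking existing consumers.
public class TransactionMessage {

    // Two dynamic identifier fields (stand-ins for the highlighted
    // fields in the original design).
    public String transactionId;
    public String productCode;

    // Fixed fields merged from inbound partner and backend payloads.
    public String partnerReference;
    public String backendReference;
    public String amount;

    // Optional/dynamic fields: any JSON key not declared above lands
    // here, which is what makes JSON's dynamic key-value style work
    // well for optional fields.
    private final Map<String, Object> dynamicFields = new LinkedHashMap<>();

    @JsonAnySetter
    public void addDynamicField(String key, Object value) {
        dynamicFields.put(key, value);
    }

    @JsonAnyGetter
    public Map<String, Object> getDynamicFields() {
        return dynamicFields;
    }
}
```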

We tested this design by playing with possible formats from an inbound partner using SoapUI and RAML, and regression-tested them with JMeter. We tested parallel calls, sequence numbers arriving in different orders, data nested outside its natural location, and so on, to make sure what we designed was agnostic enough. In that exercise, we made minor adjustments. As soon as the real second inbound partner integrated, we found another set of small adjustments to our internal schema. Our plan to organically grow the schema is working, and the fact that we chose JSON helps: dynamic keys and values work best for optional fields. But the question remains: what if a new backend system arrives? The customer hasn’t had that scenario to date.
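One such agnosticism check can be sketched as a JUnit test against the hypothetical `TransactionMessage` above: an inbound message whose fields arrive in a different order should deserialize to the same internal representation.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Sketch of one agnosticism check: the internal schema should not
// care about the order in which an inbound partner sends its fields.
class TransactionMessageAgnosticismTest {

    private final ObjectMapper mapper = new ObjectMapper();

    @Test
    void fieldOrderDoesNotChangeTheParsedMessage() throws Exception {
        String naturalOrder =
            "{\"transactionId\":\"T1\",\"productCode\":\"P1\",\"amount\":\"10.00\"}";
        String shuffledOrder =
            "{\"amount\":\"10.00\",\"productCode\":\"P1\",\"transactionId\":\"T1\"}";

        TransactionMessage a = mapper.readValue(naturalOrder, TransactionMessage.class);
        TransactionMessage b = mapper.readValue(shuffledOrder, TransactionMessage.class);

        assertEquals(a.transactionId, b.transactionId);
        assertEquals(a.productCode, b.productCode);
        assertEquals(a.amount, b.amount);
    }
}
```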

Case 3: Infrastructure Setup, to Expose or to Semi-Expose

It is true that an API and ESB middleware platform is just a regular application on a Unix or Windows server, so standard enterprise best practices in firewalls, server hardening, and corporate internet policies apply. It’s easy to say that network design is supposedly an infrastructure call, but not in middleware service consulting: we are expected to have the wisdom and breadth of experience to recommend the architecture to the customer even in this area. This case is about a hybrid implementation of API and ESB middleware. From a physical architecture standpoint, there is a more exposed middleware that functions as a gateway, abstracting the customer’s data through APIs, and another, internal middleware that does the traditional ESB work, serving the sensitive, mission-critical LAN systems. The dilemma lies in which network layer to deploy each of these. There were two options.


Option 1: API gateway in the cloud, on-prem in DMZ.


Option 2: API gateway in DMZ, on-prem in private LAN segment.

As part of the delivery team, at face value the upper hand belongs to Option 1. Being nearer the cloud would give us better agility, because the API gateway can reach any host or URL in the cloud without much security-process bureaucracy. A turn-key setup is also potentially an option if the customer uses cloud-based IaaS like AWS EC2 or ESB product(s) with a PaaS option. On the other hand, having everything within the customer’s network means better latency to LAN and private applications and more tightly controlled security. Here’s a breakdown of the high-level pros and cons.

Option 1

Pros: potentially better latency to internet- and cloud-based endpoints; less security bureaucracy; turn-key PaaS managed service.

Cons: security control isn’t in-house; longer latency to LAN endpoints.

Option 2

Pros: better latency to LAN endpoints; security is more controlled.

Cons: potentially longer latency to internet- and cloud-based endpoints; longer transition because of security policy; both middlewares have to be internally managed.

Depending on where you’re sitting, each item can be a good thing or not. The decision thereby lies in weighing what is more important: speed or security? Another thing to consider is where the more critical endpoints are located: are predominantly more of them in the cloud, or the other way around? In this case, we did go for Option 1; the fact that the project’s broker and NoSQL database persistence endpoints are in the cloud outweighed the rest. However, we can only manage what we can measure at that point, and we bumped into a scenario where Option 1 isn’t a good choice. We needed to preserve the inbound payload after synchronously processing a transaction, so we could FTP the original payload to a file server in the LAN that has no public IP address. With Option 1’s setup, where the inbound payload was taken by the API gateway housed in the cloud, we were forced into an inflexible approach to the FTP requirement; it would have been cleaner had the API gateway been in the DMZ. After encountering that one-off issue, we decided to revisit the setup before finalizing the decision.
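Since we can only manage what we can measure, one cheap piece of due diligence before committing to a placement is to time round trips from each candidate segment to the critical endpoints. Here is a minimal Java sketch; the endpoint URLs are placeholders, not the project’s real hosts.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Rough round-trip timing from the current host to candidate
// endpoints, to compare cloud vs. LAN latency before deciding
// where the API gateway should live. URLs are placeholders.
public class EndpointLatencyProbe {

    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();

    public static void main(String[] args) throws Exception {
        String[] endpoints = {
            "https://broker.example-cloud.com/health",  // cloud broker
            "http://erp.internal.example.com/health"    // LAN ERP system
        };
        for (String endpoint : endpoints) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint)).GET().build();
            long start = System.nanoTime();
            HttpResponse<Void> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%s -> HTTP %d in %d ms%n",
                endpoint, response.statusCode(), elapsedMs);
        }
    }
}
```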

The Proof of the Pudding Is in the Eating

Over the years, what we learned the hard way is that best practices are guides that help us design the implementation in the most logical and sensible way. However, in the integration space, there’s no shortage of new scenarios and combinations thereof; you’d be surprised how many unforeseen situations arise during delivery. At times, the practicality of the situation can sway the decision the other way, contradicting the initial impression. The customer’s competency maturity, internal setup, processes, and historical behavior are some of the factors to consider before finalizing a holistic design and implementation decision. Unlike in product development, in services consulting for enterprise customers, personal experience can win over statistical evidence.

What’s consistent is that an Agile approach is the more suitable delivery methodology. In middleware, more often than not, the specification will change as output arrives: the customer tends to only know what they want, or to elaborate on it more deeply, upon seeing some output. Constant iteration and automated regression test suites like JUnit help spot design-decision mistakes early on. The fail-fast principle lets it all come out in the wash, so long as the defect surfaces within the sprint’s development stages.

The Silver Lining

There are always two sides of the coin for most of these close-call design and architectural decision points; there’s always an upside to every downside. Especially in the API/ESB middleware integration space, there are many ways to deliver the integration goal the right way. Among the right ways, there is no true or false, only pros-and-cons trade-offs, and some would even bear fruit long after the implementation. What’s key in these design challenges is to do adequate due diligence, like immersion or discovery exercises, to have ample data to arrive at an informed decision. A good way to avoid reinventing the wheel, in case the same or a similar challenge arises again, is to keep some form of registry repository in our implementations, including the architecture and these situational customer profiles, to serve as a knowledge center for the team.


