Ushering in the New Era of Enterprise Integration: Data Virtualization
Data is the currency of digital transformation, and data-driven application integration is the future of enterprise architecture.
The world of enterprise and application architecture is growing increasingly complex, and organizations need a solution that simplifies their landscape while ensuring high levels of data security and governance. To become a successful digital business, you need to select a data and application integration strategy that enables you to unify all your company's data assets and analyze them in the context of your business's bigger picture.
Data is the currency of digital transformation, and data-driven application integration is the future of enterprise architecture. Organizations are re-architecting and re-imagining their integration strategy with a one-to-many hub model that places the data you care about - Virtual Data Resources - at the center.
In this two-part blog series, we'll take a deeper look into this shifting approach to enterprise integration. First up, how organizations are moving towards a Virtualized RESTful Resource Model:
Point-to-point integration models, which originated back in the 1980s, worked well when a few vendors like Microsoft, Oracle, and SAP ruled the earth with monolithic application suites. Point-to-point ESBs could connect the few applications that companies used to run their internal business. Ultimately, these integrations were simple and predictable.
However, over the following decades there has been a major shift in the application landscape: the average enterprise now uses more than a thousand applications, and those are just the ones IT knows about. Small and mid-size companies are using anywhere from a handful to hundreds of applications to run their business. Entire industries such as banking, retail, and healthcare are being "unbundled" into a myriad of fragmented software applications. Each unbundled industry builds more apps and publishes more APIs, and each new app is another island of data.
With this rapid proliferation of applications used in the enterprise comes a big challenge for IT to centrally master data. There is no longer just one answer to what an "Account" object looks like. Each department, line-of-business owner, and end user is customizing the data models in their SaaS apps to capture the data that's unique to their business units and departments. You can't simply duplicate data everywhere, and even the concept of the "golden record" is quickly fading away because of the limitations imposed by a Master Data Management (MDM) system.
This means the physical structure of your data now exists across a broad set of applications within your enterprise and in the SaaS apps used by your ecosystem of users. This approach to integration requires that you learn the structure of your data at each individual endpoint. You integrate by becoming an expert in the object models of dozens, hundreds, or thousands of objects at each endpoint. The problem? Simply put, this doesn't scale. We believe that your developers shouldn't have to be experts in each application's data model. Even when an integration vendor provides templates and mapping intelligence, you're still operating from a point-to-point perspective, dependent upon the data model at each endpoint.
Enter Data Virtualization
Data virtualization applies an abstraction layer that separates the logical view of data from its physical representation. Your company's data is physically stored across dozens, hundreds, or even thousands of applications and databases that may live in a variety of places, including your data centers, in the cloud, or in each SaaS application vendor's cloud. For example, your data about revenue is in your CRM system. Your data on leads, contacts, and campaigns is in the dozens of applications used by your marketing team, many of which are now in the cloud. Is there one source of truth for what a customer, employee, or product object looks like?
Each application and application vendor has its own point of view of each data object. Data virtualization gives you the ability to establish your point of view of a data object as the only view that matters. By abstracting your view of the data from the underlying application's physical view, you can begin to govern and manage your data regardless of each application's data structure. We call this abstraction, which represents your company's or application's point of view, a Virtual Data Resource.
So, what is a Virtual Data Resource? Virtual Data Resources (or VDRs) put your data model at the center of your application ecosystem and enable you to manage the data you care about in the way that is best for your company. Virtual Data Resources provide a canonicalized view of your data objects while eliminating the need for point-to-point mapping of data between each application. With a virtualized, canonicalized view, you are in charge of your data and can govern it as RESTful Resources that can access the data from any endpoint. You define your structure and, through transformations, make every application you integrate with look like your desired data structure.
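To make the idea concrete, here is a minimal sketch of the transformation concept behind a VDR. All field names and record shapes below are hypothetical, invented for illustration; they don't reflect any particular vendor's API:

```python
# Hypothetical sketch: two applications store the same "Contact" object
# in different shapes. Each gets one transformation into the canonical
# Virtual Data Resource, so consumers only ever see one structure.

def from_crm(record: dict) -> dict:
    """Transform a CRM-style contact into the canonical VDR shape."""
    return {
        "first_name": record["FirstName"],
        "last_name": record["LastName"],
        "email": record["Email"],
    }

def from_marketing(record: dict) -> dict:
    """Transform a marketing-app contact into the same canonical shape."""
    first, _, last = record["full_name"].partition(" ")
    return {
        "first_name": first,
        "last_name": last,
        "email": record["email_address"],
    }

crm_contact = {"FirstName": "Ada", "LastName": "Lovelace",
               "Email": "ada@example.com"}
mkt_contact = {"full_name": "Ada Lovelace",
               "email_address": "ada@example.com"}

# Both endpoints now look like your desired data structure.
assert from_crm(crm_contact) == from_marketing(mkt_contact)
```

The key point is the direction of the mapping: each application maps to your canonical model once, rather than to every other application it shares data with.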
Point-to-point integration models don't virtualize the data objects. They connect data objects into an application network without a shared or common representation of that object. Each endpoint's data structure is mapped to every other endpoint that shares that object or resource.
With point-to-point models, you will have to develop and maintain n-1 mappings for each new application you buy. We call this the integration hairball. When does mapping become the integration hairball that can no longer be managed? How do you keep up when the new applications being used by your enterprise are multiplying faster than you ever imagined?
Tying It Together
The hyper-proliferation of applications and APIs has strained traditional point-to-point integration approaches. The physical representation of data needs to be abstracted in order for companies to manage their data across the hundreds and even thousands of applications in their app ecosystem. Data virtualization will become a key component of both API management and integration disciplines in order to bring order to the fragmentation and proliferation of data across an enterprise.
Make sure to keep an eye out for Part 2 in this blog series. In the meantime, gain more insights in our latest whitepaper on how enterprises are shifting to a model that places the data they care about at the center of their integration strategy through data virtualization. Get your copy of The Future of Enterprise Integration: Data Virtualization here.
Published at DZone with permission of Ross Garrett, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.