Meeting the Challenge of Hybrid Clouds and Overflow Data
On the face of it, hybrid clouds are increasingly attractive for enterprises and organizations. What’s not to like about being able to tap into large amounts of IT resources when needed, and then walk away with no more to pay? Yet distributing workloads is not the whole story. Real hybrid cloud computing means business processes connecting with on-premises systems and public IT compute and storage, preferably without having to bother (too much) about which is which. That raises the prickly subject of data, which may be collected in one place, consumed in another, and stored in yet another. However, none of these locations necessarily uses the same data format as the others.
Pushing the Envelope of Data Democracy
Departments like hybrid cloud solutions because they offer freedom and immediacy of action. If the sales department can use a popular SaaS CRM solution, it doesn’t have to wait for the IT department to install and commission new servers. If accountants and project managers can access similar pay-as-you-go solutions in the cloud, their professional lives may change radically with tools that are available whenever and wherever they work. It’s no wonder the IT department starts to be bypassed – until users decide they would like to bring their cloud-generated data back to their local systems, or vice versa.
Hybrid Clouds Should Solve Problems, Not Create Them
Overflow data as such really only exists in simple hybrid cloud usage, such as archiving and backup. In other cases, data transformation takes place: the data does not just overflow, it changes. Purists would argue that data storage without further processing is not even a hybrid model; nonetheless, it is a good starting point for understanding the challenges of moving data from one environment to another. At the first level, enterprises must deal with issues such as latency, security, and reliability. Higher-performance links, encryption, and cloud provider vetting are, respectively, possible solutions, although each typically involves the IT department. At the next level, where hybrid cloud computing starts in earnest, data synchronization and transformation are required. Technical solutions to these needs typically surpass the capabilities of the average user and often threaten to overload the IT department’s workforce.
Data Formats Are the Most Complex Issue
The solutions to the first level of issues above (higher-speed links, encryption, vendor vetting) are one-time efforts. For example, once a sufficiently high-performance link is in place, the issue of latency should be resolved for at least some time to come. Synchronizing and transforming data to match the format of the destination system, on the other hand, requires ongoing attention. For instance, Salesforce in the cloud, an on-premises SQL database, and Dropbox, again in the cloud, will not use the same data formats. Data connectors must be written or acquired to change the format appropriately as the data circulates from one system to another.
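At its simplest, a data connector of the kind described here is a field mapping between two systems’ schemas. The sketch below is a minimal, hypothetical illustration: the Salesforce-style and SQL-style field names are assumptions made for the example, not real API schemas.

```python
# Hypothetical sketch: a minimal "connector" that renames record fields
# from one system's schema to another's. Field names are illustrative
# assumptions, not real Salesforce or SQL schemas.

SALESFORCE_TO_SQL = {
    "FirstName": "first_name",
    "LastName": "last_name",
    "Email": "email",
}

def transform(record: dict, mapping: dict) -> dict:
    """Rename the keys of one record according to a field mapping,
    dropping any fields the mapping does not cover."""
    return {dst: record[src] for src, dst in mapping.items() if src in record}

crm_record = {"FirstName": "Ada", "LastName": "Lovelace", "Email": "ada@example.com"}
sql_row = transform(crm_record, SALESFORCE_TO_SQL)
print(sql_row)  # {'first_name': 'Ada', 'last_name': 'Lovelace', 'email': 'ada@example.com'}
```

A production connector would add type conversion, error handling, and incremental synchronization on top of this core mapping step, which is why such connectors are usually acquired rather than written in-house.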
There is no secret to the basics of productivity and profitability for an enterprise: they are a matter of doing the right things, and doing those things right. IT departments and their CIOs who know this begin by working with the other departments to understand their business needs and how those needs will best translate into IT solutions. By then enabling these departments to self-provision the cloud services they need, they give users control over the solutions to their needs and in doing so offload work from the IT department. Easy-to-use data connectors with automation capabilities speed up delivery of these services from the IT team to the departments asking for them. Taking baby steps towards full-function data transformation is possible too: import and export of CSV files, as a lowest common denominator, can precede more sophisticated use of mappings and expressions between data stores, with the added advantage of reducing the expense of API calls to cloud applications.
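The CSV baby step can be sketched in a few lines using Python’s standard library. The example below is hypothetical: it exports dictionary rows to CSV text (standing in for a file handed to a cloud application) and re-imports them with a simple mapping expression applied on the way in; all field names and values are invented for the illustration.

```python
# Hypothetical sketch: CSV export/import as the lowest-common-denominator
# exchange format between a local data store and a cloud application.
import csv
import io

rows = [
    {"account": "Acme Corp", "region": "EMEA", "revenue": "125000"},
    {"account": "Globex", "region": "APAC", "revenue": "98000"},
]

# Export: write dict rows as CSV text (in place of a file sent to the cloud app).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["account", "region", "revenue"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()

# Import: read the CSV back, applying a simple mapping expression
# (rename a field, convert a string to an integer) on the way in.
imported = [
    {"name": r["account"], "revenue_eur": int(r["revenue"])}
    for r in csv.DictReader(io.StringIO(csv_text))
]
print(imported[0])  # {'name': 'Acme Corp', 'revenue_eur': 125000}
```

Because the whole batch moves in one file, this style of exchange also avoids the per-record API calls that cloud applications often meter and bill.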
Seamless Data Integration and Software Defined Everything
The kind of “cloud bursting” that enterprises and organizations want to do should be rapid and easy to accomplish. Dealing with peak or seasonal order loads and extracting insights for competitive advantage from big data are both examples of workloads that can exceed a company’s own computing capability, yet be of major or critical importance. The “single pane of glass” concept, in which an enterprise can automatically connect to the resource, local or cloud, required for a job, is still to come. However, seamless data integration with the right data connectors is already a reality. When the rest of the software-defined compute structure arrives, the data synchronization and transformation components will already be available.
Will “Born in the Cloud” Change Things?
Using the cloud as the starting point for the IT resources needed by an enterprise has also become popular, particularly for startups and other green-field cases. However, as long as free market conditions continue to exist (and why would they ever stop?), hybrid cloud computing is likely to remain a reality, this time between different cloud providers. Features and pricing differences are likely to make different offerings attractive for different company IT requirements. Efficient, reliable data connectors will have as big a role to play in any future cloud(s)-only scenario as they do today for mixed on-premises, private and public cloud computing.
Opinions expressed by DZone contributors are their own.