David Linthicum recently wrote an article in InfoWorld offering three tips for successfully migrating data to the cloud. It offers some good advice, but one question I’d raise is why this migration has to target a public cloud, when private cloud technologies can offer many of the same benefits while mitigating some of the challenges that David warns about.
Before getting into the tips, David asserts his basic premise for why this migration makes sense. He states that data management costs can represent as much as 70% of IT spending, making big data and transactional data potential killer applications for the public cloud by dramatically reducing those expenditures. While this is true, in the right situations the benefits of a private cloud can be even greater and easier to realize. To see how, let’s look at the tips that David provides and how they translate to private cloud environments.
The first tip is to "consider the data volume early in the process." As he points out, you cannot simply upload terabytes and petabytes to public clouds. Or, put another way:
Moving large amounts of data from the data centers where it is captured to the public clouds where it can be processed is clearly costly and time-consuming. Therefore, as David rightly points out, if you want to take advantage of this approach, you need to be smart about it.
Of course, if you can get the economic benefits of the cloud while having those on-demand resources right next to the enterprise applications that are generating all that data, why wouldn’t you do that? Private clouds can allow you to do exactly this.
Another way to look at this is through the concept of "data gravity," which Matt Aslett of 451 Research has described as a way of thinking about cloud adoption patterns. Effectively, there is a gravitational pull that naturally draws processing closer to where data is created or resides.
Going back to David Linthicum, his second tip is "don’t always look for the same database in the cloud as you have on-premises." On this point, I wholeheartedly agree. In fact, I recently wrote this SD Times article challenging enterprise developers to embrace polyglot persistence.
One of the challenges of this approach in the private cloud is that it is difficult to find expertise in a large assortment of data management technologies. This is where a DBaaS platform like OpenStack Trove comes in. By providing a consistent way for IT staff to operationally support a diverse set of databases, from SQL to NoSQL to data warehouses, OpenStack Trove substantially eases this burden. It also gives developers self-service access to a managed database instance well-suited to each particular data management challenge.
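To make that self-service model concrete, here is a minimal sketch of what provisioning a database through Trove’s command-line client can look like. The instance name, flavor ID, and volume size are illustrative assumptions, not values from any particular deployment, and the available datastores and exact flags depend on what your operators have enabled and on the Trove release in use.

```shell
# List the datastores this Trove deployment supports
# (e.g. mysql, postgresql, mongodb -- whatever is enabled).
trove datastore-list

# Provision a managed MySQL instance. "demo-db", the flavor ID "2",
# and the 5 GB volume size are illustrative values for this sketch.
trove create demo-db 2 --size 5 --datastore mysql

# Check provisioning status and connection details.
trove list
trove show demo-db
```

The point is that a developer asks for "a MySQL instance" rather than standing up and patching a database server, while IT staff manage backups, replication, and upgrades uniformly across all the datastores behind the same API.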
David’s third tip is that "security and governance are systemic to everything: your data, applications, network, and storage." Once again, this is a very important consideration, and, yet again, a place where the private cloud can deliver a lot of value.
While public clouds certainly offer tools to ensure compliance with corporate policies and regulations, these tools are often implemented by application developers and not IT staff familiar with specific requirements. In a private cloud implementation, policies can be established and enforced at a corporate level that developers can take advantage of, ensuring consistent compliance across the organization.
In addition, the use of private clouds provides ultimate control over where databases, and the data they house, reside. This can be important when, for example, patient data is required to stay within a particular location, or where there are tax or privacy implications of data crossing geographic borders.
Of course, managing a private cloud isn’t for everyone. An organization needs to be large enough to get some of the benefits of scale and shared utilization that a cloud can provide. For those that fit that profile, however, the benefits can be significant.