The Open Data Center Alliance (ODCA) is holding its Forecast event in San Francisco in June, and I’ve been invited to moderate the panel discussing Virtual Machine Interoperability. As moderator, I’ll be far more interested in facilitating insights from panel and audience than in wittering on about what I think, so I wanted to use this blog post to begin getting some of the issues clear in my mind. What is VM interoperability, and why does it matter?
From time to time, I write about Open Data. This has nothing to do with that. The Open Data Center Alliance is interested in data centres, not data. The Alliance was established back in 2010 with Intel driving things forward, and now claims over 300 member organisations, including the likes of BMW, Lockheed Martin, Microsoft, Deutsche Bank and Marriott Hotels.
According to the ODCA website,
we came together to deliver a unified voice for emerging data center and cloud computing requirements. Our mission is to speed the migration to cloud computing by enabling the solution and service ecosystem to address IT requirements with the highest level of interoperability and standards.
Much of the Alliance’s work involves identifying customer requirements and capturing these in a series of usage models. In theory, prospective customers can modify these usage models in defining their own requirements, and suppliers can tap new business by aligning their offerings to the models. I’ve not seen much evidence that this is happening at scale yet, but the Alliance site does state that
we anticipate quick industry response to the requirements, initial POCs of solutions beginning in 2012. This could accelerate over $50 billion in cloud service investments and is expected to save $25 billion through IT operational efficiency due to cloud adoption.
One of those usage models is concerned with Virtual Machine Interoperability in a Hybrid Cloud Environment (pdf), and last month it was augmented by the release of a Proof of Concept document (pdf) which
outlines testing criteria and procedures for documenting how hypervisor and VM solutions from both ODCA members and non-members interoperated in real-world enterprise cloud scenarios.
Quoted in an April press release to mark publication of the PoC document, ODCA executive director Marvin Wheeler commented that
true interoperability of hypervisor and virtual machine solutions between clouds is critical to advancing enterprise ready cloud implementations. We encourage global IT managers and cloud solution and service providers to join us at the Forecast 2013 VM interoperability panel on June 17 to take part in the collaborative discussions that will help drive the next phase of VM interoperability in the enterprise cloud.
The basic premise is a simple one: a virtual machine started on one hypervisor or class of physical server should run equally well when moved to run on a different hypervisor or physical server. Further, there is a presumption that there is a credible business requirement for this capability. As enterprise users of cloud increasingly find themselves adopting a hybrid approach spanning their own data centre, co-location facilities, hosting sites and diverse public clouds, the likelihood that they will need to run their standard Windows or Linux virtual machines on top of more than one hypervisor certainly increases. It’s less clear, though, that an inability to reliably move a Linux VM from the KVM hypervisor in your own data centre to the Xen hypervisor at a cloud provider is causing significant business pain today; it’s often reasonably straightforward, for example, to simply select a different cloud provider able to support your chosen KVM hypervisor.
But even if a lack of VM interoperability isn’t presenting an insurmountable barrier to business right now, it’s still one more thing to think about when pulling a set of disparate services together. If we can cost-effectively and pragmatically remove or reduce that complication, then that’s presumably a good thing.
ODCA’s PoC work took the usage requirements the organisation had already defined, and applied them to a specific set of documented tests. As IDG’s Joab Jackson notes in his InfoWorld piece, the results were not great.
Running through all the different possible combinations of hypervisors and OSes, the researchers found that 13 test cases resulted in warnings, and 19 test cases failed entirely. Only in two cases did the VM work flawlessly across two different hypervisors. In both of these cases, a VM created with Xen worked without troubles on a Microsoft Hyper-V environment — in one case running Ubuntu and in the other case running Windows Server.
The researchers do note that they set the bar for success pretty high, and that several of the ‘warning’ states would still result in a functional VM. It does appear clear, though, that customers with a real need for VM interoperability across hypervisors face significant challenges in efficiently managing workloads across different virtualisation infrastructures.
And that brings us to the panel at Forecast, which can hopefully help quantify the true scale of the problem… and indicate some of the ways in which vendors are working to fix the broken bits. In terms of the panel discussion itself, I look forward to delving into at least the following:
- how big a problem is the current state of VM interoperability?
- do the ODCA VM Interoperability use cases prioritise the right things?
- what do customers really need, and are they getting it?
- how are vendors really responding to requirements such as those defined by ODCA?
If you have opinions to share, perspectives to offer, facts with which to illuminate, or questions to ask, I do hope you’ll join us.
ODCA has given me a couple of free passes to Forecast. If you want to come along and we know one another, then please do get in touch.
Note: ODCA is covering my flight to San Francisco, and accommodation during the event.