Avoiding the Dark Side of the Cloud: Platform Lock-In
One big problem with moving to the cloud is the ever-dreaded vendor lock-in. See where the fear comes from and browse a few solutions to avoid it.
The rise of cloud computing is indisputable — driven primarily by the promise of agility in bringing new applications to market faster, and by more closely aligning expense with actual business usage. But moving to the cloud is not without risk. Many surveys, such as the MongoDB Cloud Brief discussed later, point to the fear of platform lock-in as one of the top inhibitors to ongoing cloud adoption. Enterprises are turning to open source software to throw off the shackles of proprietary hardware and software, but they are also concerned about exposing the business to a new level of lock-in, this time from the APIs and services of the cloud providers themselves.
In this blog, we explore the drivers and inhibitors of cloud adoption, as well as the factors behind the fear of cloud lock-in. We'll then discuss the steps users can take to get the best of both worlds: the business velocity provided by the cloud, without the risk of locking themselves into a specific vendor.
Growing Cloud Adoption
So how quickly is the cloud growing? Recent analysis of the Infrastructure-as-a-Service (IaaS) market by IDC (1) provides some interesting statistics:
- Spending on public cloud platforms is expected to reach $23 billion by the end of 2016, representing just under 20% growth over 2015.
- Private cloud spending is expected to reach $13 billion over the same period, representing 10% growth.
If we contrast this with spending on “traditional” IT infrastructure, we see a forecast decline of 4.5% through 2016. Now is not a good time to be a peddler of premium IT hardware. By 2020, IDC expects total IaaS cloud spending to hit just under $60 billion, making revenues (almost) as large as the traditional IT infrastructure sector.
Of course, the cloud is a natural home for startups building their businesses. I’m old enough to remember when early seed funding was dedicated purely to financing your own Sun hardware and Oracle software licenses so that you could actually demo your new concept. The thought of doing this today is laughable.
But it’s not just startups that are driving cloud growth. Research from RightScale (2) concluded that 17% of enterprises now have over 1,000 Virtual Machines (VMs) provisioned to public cloud providers, up from 13% of enterprises in 2015. Private cloud showed even stronger growth with 31% of enterprises running more than 1,000 VMs, up from 22% in 2015.
Here at MongoDB, we’ve conducted our own research, polling over 2,000 members of the MongoDB community. This research found that 82% of respondents were strategically using or evaluating the cloud today. This finding, along with a multitude of other facts and insights, is available in our MongoDB 2016 Cloud Brief.
The Top Drivers and Inhibitors of Cloud Adoption
As the Cloud Brief shows, the number one driver for cloud adoption is agility — the need to roll out new applications faster. This desire was reinforced at a recent meeting I had in London with developers from a leading global financial institution. They complained it takes three months for hardware supporting a new project to be procured, installed, racked, and stacked. That is clearly unacceptable in today’s hyper-competitive market governed by agile development, continuous integration, and elastic scaling.
This need for application agility was the top cited reason for cloud adoption across organizations of all sizes — from those with fewer than 50 employees to enterprises with more than 5,000. It was also the top reason for cloud adoption cited across all job titles — from the CIO through to architects, developers, and DBAs.
The Cloud Brief shows another interesting statistic: the majority of respondents use more than one cloud provider. This was primarily driven by the need to take advantage of specific features offered by one provider over another, and it clearly demonstrates the need to remain flexible in your cloud choices. Hitching your wagon to one provider could present a serious competitive disadvantage if another cloud vendor introduces something that your rivals can take advantage of, but you can’t. What is that “something”? It could be a specific service or feature, a region, an instance type, a pricing schedule, or a performance advantage. The list goes on.
When our survey respondents evaluated the leading inhibitors of cloud usage, security and data privacy came out on top, followed closely by cloud vendor lock-in. We did see more bifurcation in the responses to this specific question:
- Security was the top inhibitor in medium and large-sized enterprises. Lock-in was the second top inhibitor.
- Lock-in was the top concern among smaller enterprises.
Company Size and the Cloud
Small organizations have greater freedom to innovate quickly and are less likely to be tied to legacy software. Maintaining maximum flexibility as they build their apps means avoiding vendor lock-in that would restrict their ability to evolve.
Larger organizations are more likely to have mature contracts with software vendors and are therefore less sensitive to the loss of flexibility caused by long-term vendor agreements. Concerns over data security resonate more for large enterprises as high-profile attacks and data breaches are a substantial threat to a large brand. However, lock-in was the second top concern for these larger organizations, ahead of the technical expertise needed to run on the cloud, or concerns about maintaining performance and availability SLAs for workloads running in the cloud.
So Where Does the Fear of Lock-in Come From?
As discussed in the introduction to this post, many organizations have been burned by lock-in in the past. The use of open source software and commodity hardware has provided an escape route for many, but they have concerns that by moving to the cloud, they trade one form of lock-in for another.
What form does that lock-in take? It’s not about the hardware, operating systems, and software of the past. Instead, it’s about APIs, services, and data. The underlying IaaS components — compute, storage, and networking — are pretty much commodity and can be exchanged between cloud providers. But as we move up the infrastructure stack, the APIs and data these services exchange become much less portable. Specifically, we need to think about security, management, continuous integration/continuous delivery (CI/CD) pipelines, container orchestration, serverless compute fabrics, content management, search, databases, data warehouses, and analytics, to name just some of the key friction points.
And it’s those services that manage our data that cause particular concern. You may have heard of the term “data gravity.” It was (presciently) coined a few years ago, but it has a real resonance today. Just as the gravitational pull of an object grows with its mass, the more data you accumulate in a specific location, the harder that data becomes to move.
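Data gravity is easy to quantify with back-of-the-envelope arithmetic: both transfer time and egress cost grow linearly with dataset size. The sketch below illustrates this; the bandwidth and per-gigabyte egress price are illustrative assumptions, not any provider's actual figures.

```python
def migration_estimate(dataset_tb, bandwidth_gbps, egress_usd_per_gb):
    """Estimate hours to transfer a dataset out of a cloud, and its egress cost.

    Assumes a sustained effective bandwidth and a flat per-GB egress price
    (both hypothetical inputs, chosen by the caller).
    """
    dataset_gb = dataset_tb * 1000          # decimal TB -> GB
    seconds = dataset_gb * 8 / bandwidth_gbps  # 1 GB = 8 gigabits
    hours = seconds / 3600
    cost = dataset_gb * egress_usd_per_gb
    return hours, cost

# 50 TB at a sustained 1 Gbps, with an assumed $0.09/GB egress price:
hours, cost = migration_estimate(dataset_tb=50, bandwidth_gbps=1,
                                 egress_usd_per_gb=0.09)
# Roughly 111 hours of transfer and $4,500 in egress fees — and both
# figures scale linearly as the dataset keeps growing.
```

The point is not the exact numbers but the linear scaling: every terabyte you add today raises the toll you pay to leave tomorrow.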
An article (3) in the UK’s Computing tech publication illustrates this point. Comparethemarket.com, the largest price comparison site in the UK, made the switch from managing its own on-premises infrastructure to Amazon Web Services (AWS). As part of that move, the IT team considered the AWS DynamoDB NoSQL database service. However, concerns about exposing itself to excessive AWS control led Comparethemarket to eliminate DynamoDB as an option. The company has since standardized on MongoDB as the operational database for its microservices-based architecture.
There is an important take-away in all of this:
It’s fine to date your cloud provider, but don’t marry them.
MongoDB and the Cloud
We’ve just launched our shiny new MongoDB Atlas database-as-a-service, providing all of the features of MongoDB without the operational heavy lifting of running the database yourself.
So doesn’t this new service present the same risk of cloud lock-in? The answer is “no,” for two important reasons.
The first is that MongoDB Atlas is designed to run on multiple public cloud platforms — so you can spin it up on your vendor of choice. It is available on AWS today, with Azure and Google Cloud Platform coming soon. Eventually, we plan to offer MongoDB Atlas across clouds, so you can stretch your MongoDB deployment across providers to take advantage of, for example, specific pricing schemes, regions, or platform features.
Secondly, MongoDB Atlas is running the same software you can download yourself from the MongoDB Download Center. This means MongoDB can run on your laptop, on your own local servers, in your chosen co-location facilities, or on your own instances on any public cloud provider.
It is quick and easy to migrate existing databases into MongoDB Atlas and to get them back out again, as we demonstrate in this MongoDB Atlas migration blog. Equally helpful in mitigating lock-in: if you decide to bring operations out of MongoDB Atlas and back under your own control, you can move your databases onto your own infrastructure and manage them with the MongoDB Ops Manager and MongoDB Cloud Manager tools. The user experience across MongoDB Atlas, Cloud Manager, and Ops Manager is consistent, ensuring minimal disruption if you switch to your own infrastructure.
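Because Atlas runs the same server software as a self-managed deployment, the same driver code works against either. A minimal sketch of that portability, assuming the PyMongo driver: the copy logic below only depends on objects exposing `find()` and `insert_many()`, so it behaves identically whether the source or destination is a laptop, a co-location server, or an Atlas cluster (the connection URIs in the comment are placeholders, not real endpoints).

```python
def copy_collection(source, dest, batch_size=1000):
    """Stream documents from a source collection to a destination in batches.

    Works with any objects exposing find() and insert_many(), including
    pymongo Collection objects pointed at self-managed or Atlas deployments.
    """
    batch = []
    for doc in source.find():
        batch.append(doc)
        if len(batch) >= batch_size:
            dest.insert_many(batch)
            batch = []
    if batch:  # flush the final partial batch
        dest.insert_many(batch)

# Against real deployments you would wire it up roughly like this
# (hypothetical URIs and names, shown for illustration only):
#
#   from pymongo import MongoClient
#   src = MongoClient("mongodb://localhost:27017")["appdb"]["users"]
#   dst = MongoClient("mongodb+srv://user:pass@cluster.example.net")["appdb"]["users"]
#   copy_collection(src, dst)
```

For bulk migrations of whole deployments you would normally reach for MongoDB's own tooling (such as mongodump/mongorestore) rather than driver-level copies; the sketch simply shows that no code change is needed when the endpoint moves between providers.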
Figure 1: Consistent operational interface, wherever you run MongoDB
The reality is that if you try to achieve this type of flexibility with any of the public cloud vendors’ database services, you’ll soon hit a wall.
Published at DZone with permission of Mat Keep, DZone MVB. See the original article here.