The IT pendulum swings back and forth fairly reliably between centralized and distributed. Just when you think that everything will be distributed, we find economic reasons to centralize. And before we have moved everything to the middle, we find that there are optimizations to be made by distributing.
So as we watch the growth of the cloud giants, it’s worth understanding where we are in the current pendulum swing, and what the likely implications are for architecture and planning.
Cloud in a Box
When most people talk about cloud, they probably envision little cloud icons labeled AWS, Azure, GCP, Oracle, and any number of smaller players. Certainly, this is how the cloud is depicted in slideware presentations by vendors, users, conference speakers, and everyone in between.
The challenge is that this is a great way to conceptualize the players in the cloud space, but it is a lousy way to think about how a cloud is built and used.
The most useful way to think about the cloud is as a somewhat loose collection of resources on top of which workloads can be run. There is no requirement that those resources be confined to any one place. They need not be located in the same datacenter. They need not be centralized.
And they need not even be from the same cloud provider.
Physics and Economics
If the cloud is really just a set of resources that are bound together by the management and orchestration of workloads, then the real question that architects have to ask is where those resources ought to be, and which resources are ideal for application workloads.
But even here, we ought not be religious. Put simply, we should be letting the economics and physics dictate where we place our workloads. And in turn, the same optimizations ought to drive our architectures and our cloud supplier decisions.
If an application is not particularly latency-sensitive, for example, it might make perfect sense to use competitively priced resources in a central location. Here, the economics dictate that a large pool of central resources will benefit from an economy of scale that simply doesn’t play out in enterprise-owned edge devices.
But we should not fool ourselves into thinking that economics are all that matter.
Some applications require lower latency to avoid disruption to user experience. But even for applications that are not obviously latency-sensitive, if there are a lot of remote calls, small bits of latency can add up pretty quickly. For anyone who has struggled using Slack with a poor broadband connection, this will probably resonate.
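To make the point concrete, here is a back-of-the-envelope sketch of how per-call latency compounds across chained remote calls. All numbers are illustrative assumptions, not measurements from any particular application.

```python
def total_latency_ms(per_call_rtt_ms: float, num_calls: int) -> float:
    """Total round-trip time when remote calls are sequential (each
    call must complete before the next one starts)."""
    return per_call_rtt_ms * num_calls

# A 5 ms round trip per call feels negligible in isolation...
print(total_latency_ms(5, 1))    # 5
# ...but 40 chained calls at 5 ms each is 200 ms -- noticeable to a user.
print(total_latency_ms(5, 40))   # 200
# The same 40 calls over an assumed 70 ms long-haul link: 2.8 seconds.
print(total_latency_ms(70, 40))  # 2800
```

The arithmetic is trivial, which is exactly the point: chatty applications turn small per-hop latencies into user-visible delays, and moving the resources closer is the only fix once the call pattern is fixed.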
In these cases, the important metric will naturally evolve from some economic measure to proximity. How close is the resource to either the user or the data?
From Cloud to Distributed Cloud
So given that workload placement will sometimes be driven by economics and other times by physics, the notion of the cloud as a single central pool will need to change. This is why you see people talking more about distributed cloud, which is basically the acknowledgment that the resources that make up a cloud can reside anywhere — from centralized datacenters to customer premises.
In a distributed cloud model, the cloud extends to include things like carrier central offices and localized datacenters. But it also includes available edge resources, especially in the case of multi-access edge computing (MEC). So a software-defined access layer, for instance, could become relevant as an extension of a distributed cloud.
Enterprise Implications of Distributed Cloud
It seems obvious that this ought to impact architectural decisions beyond simply lifting and shifting an application so that it runs in AWS or Azure.
If, for instance, an application has a natural affinity for certain data, you might want to colocate your workload with the data. If that data is in a cloud, then you might want to host the application in the same cloud. If it is not, you need to make an architectural decision about whether it is easier to move the application or the data.
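That move-the-application-or-move-the-data decision can be framed as a simple break-even calculation. The sketch below is a hypothetical illustration; the per-GB price, dataset size, and engineering cost are all made-up assumptions, not real provider figures.

```python
def monthly_egress_cost(gb_per_month: float, price_per_gb: float) -> float:
    """Recurring cost of pulling data across a cloud boundary each month."""
    return gb_per_month * price_per_gb

def one_time_migration_cost(dataset_gb: float, price_per_gb: float,
                            engineering_cost: float) -> float:
    """One-time cost of relocating the dataset next to the application."""
    return dataset_gb * price_per_gb + engineering_cost

# Assume the app reads 10 TB/month across providers at $0.09/GB:
recurring = monthly_egress_cost(10_000, 0.09)            # 900.0 per month
# Assume moving a 50 TB dataset once, plus $5,000 of engineering work:
one_time = one_time_migration_cost(50_000, 0.09, 5_000)  # 9500.0

# Under these assumptions, colocating pays for itself in under a year.
breakeven_months = one_time / recurring
print(round(breakeven_months, 1))  # 10.6
```

The specific numbers matter less than the habit: put the recurring cost of distance next to the one-time cost of moving something, and let that comparison, rather than habit or vendor preference, drive the architecture.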
Or perhaps you have an application that is particularly sensitive to latency, so you want to push the workload closer to the user. You might evolve your thinking about your managed service provider, selecting a carrier that has appropriate support in a nearby central office (think: CORD).
Or maybe you need to host the application on-premises, which means you might need to reconsider how you are designing your branch connectivity solution. It could be that policy-based management is not enough, and you need to explore something closer to edge computing, choosing a platform with onboard CPU and storage that supports Greengrass or Azure Stack.
The Bottom Line
Basically, I am suggesting that the migration to the cloud will involve much more than just an economic discussion. To be honest, I believe a cloud strategy based primarily on economic assumptions is probably not completely baked, but that is a topic for another post. But minimally, there has to be real architectural thought put into where applications will run, and that should drive planning around far more than just the cloud supplier. Everything from the connectivity service to the campus and branch becomes an important consideration.
To sum it up, a cloud strategy needs to be more than a consideration for whoever owns and maintains the servers on which an application runs. If all you are doing is changing who pays for the boxes and wires, you will likely find that your cloud strategy is wildly disappointing when all is said and done.