What the Cloud Means for Internet Infrastructure
As the IT world embraces mobility and the cloud, developers and companies need to consider how to use the latest technology in load balancing, integration, security, and performance.
Today’s business environment is mobile and cloud-centric. Organizations that fail to migrate to the appropriate form of cloud computing are likely to lose competitive advantage. Such a migration requires a sweeping change in architecture and must be managed with great forethought and care.
Though cloud migration has become a business necessity for most, storing data on someone else’s server has proven a hindrance for many organizations and industries. Storing vacation photos in Dropbox is one thing; storing personal, medical, or financial records and transactions in a public cloud is an entirely different proposition. Any organization that manages, stores, or transmits this type of data must comply with government and industry directives that specify how to secure customers’ data.
That is why some industries remain unwilling to move sensitive data to the cloud. Even so, according to RightScale, the use of cloud computing is widespread and growing: more than 75 percent of companies surveyed were using cloud computing in some form. Even more interesting, cloud users were running applications in an average of 1.5 public clouds and 1.7 private clouds while leveraging six clouds on average. Overall, 95 percent of 2016 respondents were using some type of cloud computing.
Cloud and Mobility Bring Transformation
The rise of the IoT and the rapid proliferation of mobility have made the cloud much more than a nice-to-have tool. To understand the increasing need for online/cloud tools, it’s important to understand the underlying software that makes digital transformation what it has become today.
The original model of software delivery involved an organization buying a number of enterprise application seat licenses and installing a client on each machine in the company. These licenses often came bundled with hefty support contracts, increasing the cost of the product.
Software as a Service (SaaS) replaced this outdated model as commercial internet access became available. Software developers could move their products to an online environment where businesses and consumers accessed the software through the cloud rather than installing it locally. As this new delivery method grew in popularity, it effectively moved the software away from the customer site to the developer site, where it could be managed, updated, distributed, and controlled by the application creators as needed.
This method of delivery offered developers a way to release software more quickly and efficiently, add features and updates as needed and deliver cybersecurity patches on the fly. The cloud provided the mechanism by which an entire industry could change its distribution model. It also paved the road for a major change in how software organizations developed their product, whereby they could move away from the traditional, time-consuming and painful waterfall development cycle to the agile approach.
From SaaS sprang the concepts of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which have come into vogue as well. Core systems that underpin an organization’s combined SaaS footprint and online presence can today be outsourced to firms that specialize in a given technology or protocol. Efficiencies are gained by spreading the cost out across many different customers.
The firms rendering the service, platform, or technology mitigate risk through strong Service Level Agreements (SLAs). These agreements hold the vendor to its word that the product or service will remain available and perform within an expected window. The cost benefits are significant, and they grow as more physical or logical capacity is ordered, because costs scale incrementally with usage rather than requiring the capital expenditure of traditional scaling tactics.
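The incremental-cost point can be made concrete with a toy calculation. All unit counts and prices below are invented for illustration, not real vendor pricing:

```python
# Toy comparison of traditional capital expenditure vs. usage-based pricing.
# All figures are hypothetical examples.
def capex_cost(peak_units, price_per_unit=1_000):
    """Traditional scaling: buy hardware sized for the yearly peak up front."""
    return peak_units * price_per_unit

def usage_cost(monthly_usage, price_per_unit_month=20):
    """'As a service' pricing: costs are incremental, tracking actual usage."""
    return sum(units * price_per_unit_month for units in monthly_usage)

usage = [10, 12, 15, 14, 11, 10, 9, 10, 30, 12, 11, 10]  # one spiky month
print(capex_cost(max(usage)))  # 30000: provisioned for the peak all year
print(usage_cost(usage))       # 3080: pay only for capacity consumed
```

The gap widens whenever demand is spiky: the capex buyer pays for the peak every month, while the usage-based buyer pays for the peak only in the month it occurs.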
These “as a service” offerings lowered costs, improved efficiency, and brought the age of mobility and software delivery to new heights. However, they also created a host of new challenges, due in large part to the internet’s aging infrastructure.
Re-Imagining Internet Infrastructure
As the internet grew and became more complex, much of its original underlying technology was expected to keep up, but it just wasn’t designed for these expectations. One of the key elements of this vast global infrastructure is the Domain Name System (DNS), developed early on in the internet’s history so that people could get to the website they needed without memorizing the string of numbers that made up its IP address. DNS has become the gateway to almost every application and website on the internet.
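The name-to-address mapping DNS provides can be pictured with a minimal sketch. The zone table below is hypothetical, and the addresses are RFC 5737 documentation IPs rather than real records:

```python
# A toy illustration of what DNS does: map memorable names to IP addresses.
# The zone table is a hypothetical example, not real DNS data.
ZONE = {
    "www.example.com": "192.0.2.1",
    "app.example.com": "203.0.113.10",
}

def resolve(hostname: str) -> str:
    """Return the IP address for a name, the way a real resolver would after
    walking the DNS hierarchy (root -> TLD -> authoritative server)."""
    try:
        return ZONE[hostname]
    except KeyError:
        raise LookupError(f"NXDOMAIN: no record for {hostname}")

print(resolve("www.example.com"))  # 192.0.2.1
```

A real resolver performs this lookup across a distributed hierarchy of servers, which is precisely what makes DNS the gateway to nearly every application online.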
Traditional DNS approaches are inadequate for the increased complexity of today’s internet. Next-generation solutions are available today that allow businesses to enact traffic management in ways that were previously impossible, including:
Automatically adjusting traffic flow in real time to network endpoints, based on telemetry coming from endpoints or applications. This can help prevent overloading a datacenter without taking it offline entirely and seamlessly route users to the next nearest datacenter with excess capacity.
Partitioning data geographically to conform with regulations. Geofencing ensures users in the EU are served only by EU datacenters, for instance, while ASN fencing can ensure all users on China Telecom are served by ChinaCache.
Monitoring endpoints from the perspective of the end user, with the ability to send requests coming from each network to the endpoint that will service them best.
Handling planned or unplanned traffic spikes by using scalable infrastructure.
Creating business rules that use filters with weights, priorities and even stickiness to meet your applications’ needs.
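The capabilities above can be pictured as a chain of filters applied to a pool of candidate endpoints before a DNS answer is returned. The sketch below is a simplified, hypothetical model; the endpoint fields, rule names, and thresholds are invented, not any vendor’s actual API:

```python
import random

# Hypothetical endpoint pool; 'region', 'load', and 'weight' are invented
# fields. IPs come from documentation ranges (RFC 5737).
ENDPOINTS = [
    {"ip": "192.0.2.10",   "region": "eu", "load": 0.40, "weight": 2},
    {"ip": "192.0.2.11",   "region": "eu", "load": 0.95, "weight": 1},
    {"ip": "198.51.100.7", "region": "us", "load": 0.30, "weight": 3},
]

def geofence(endpoints, user_region):
    """Geofencing: serve EU users only from EU datacenters, and so on."""
    return [e for e in endpoints if e["region"] == user_region]

def shed_overloaded(endpoints, max_load=0.9):
    """React to endpoint telemetry: drop near-capacity endpoints from the
    answer pool without taking them offline entirely."""
    return [e for e in endpoints if e["load"] < max_load]

def weighted_pick(endpoints):
    """Business rules with weights: heavier endpoints answer more often."""
    return random.choices(endpoints, weights=[e["weight"] for e in endpoints])[0]

def route(user_region):
    pool = geofence(ENDPOINTS, user_region)
    pool = shed_overloaded(pool) or pool  # fall back rather than return nothing
    return weighted_pick(pool)["ip"]

print(route("eu"))  # 192.0.2.10: the only EU endpoint under the load cap
```

Stickiness and priorities would be further filters in the same chain; the essential idea is that each DNS answer is computed per query from live telemetry rather than from a static record.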
It is also becoming clear that DNS can represent a single point of failure if there is no backup. It’s becoming more common for enterprises to deploy redundant DNS to mitigate this risk of major downtime. Managed DNS is also on the rise, in which customers benefit from a provider’s globally anycasted DNS networks to achieve maximum reliability and fast performance.
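Redundant DNS in this sense can be sketched as falling back to a secondary provider when the primary fails. Both providers and the outage below are simulated for illustration:

```python
def query_provider(provider, hostname):
    """Simulated lookup; a real deployment would send DNS queries to each
    provider's anycast network. 'up' models a provider outage."""
    if not provider["up"]:
        raise TimeoutError(provider["name"] + " unreachable")
    return provider["records"][hostname]

def resolve_redundant(providers, hostname):
    """Try each DNS provider in order, so one outage no longer means downtime."""
    for provider in providers:
        try:
            return query_provider(provider, hostname)
        except (TimeoutError, KeyError):
            continue  # fall through to the next provider
    raise LookupError("all providers failed for " + hostname)

# The primary is down; the secondary answers with the same record.
PROVIDERS = [
    {"name": "primary",   "up": False, "records": {"www.example.com": "192.0.2.1"}},
    {"name": "secondary", "up": True,  "records": {"www.example.com": "192.0.2.1"}},
]
print(resolve_redundant(PROVIDERS, "www.example.com"))  # 192.0.2.1
```

In practice the redundancy lives in the zone’s NS records, with both providers serving identical zone data, so resolvers fail over automatically rather than in application code.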
With the move toward digital transformation comes the decision point of continuing to support legacy infrastructure or replacing it. The capital and operational expenses needed to replace hardware and software can exceed the costs of replacing systems with a cloud solution.
When considering migrating to the cloud, organizations should also think about what new software solutions exist to help them safely achieve their goals, including those that keep digital assets safe by providing a level of control over internet infrastructure that was never before possible. Organizations can now distribute their resources through redundant DNS, keeping systems both current and available.