
7 Rules for Hybrid Cloud Architectures

Learn about the seven rules to consider when adopting hybrid cloud architectures.



[This article by JP Morgenthal comes to you from the DZone Guide to Cloud Development - 2015 Edition.]

Research firm IDC predicts that by 2016 more than 65% of enterprise IT organizations will commit to hybrid cloud technologies. IDC is not alone in predicting a high rate of enterprise adoption for hybrid cloud computing: Gartner has also predicted that 50% of enterprises will have hybrid cloud deployments by 2017, and DZone’s 2015 Cloud Development Survey revealed that 50% of its audience is currently using hybrid cloud technologies.

Hybrid cloud architecture is often described as ‘private and public clouds sharing resources’. But in reality, many hybrid architectures merely leverage public cloud resources in tandem with privately-hosted applications. For example, a business might capture, aggregate, and analyze data from multiple external sources on a public cloud and then pass those results to an application running in a privately-hosted environment. Or a business might put its public web presence in a public cloud but keep the data for that application in a privately-hosted environment.

This public/private cooperation introduces potential complexities. This article will present seven rules to consider when adopting hybrid cloud architectures.

Rule 1: You Are Extending Your Operational Footprint 

One of the most critical things to keep in mind when deploying a hybrid cloud architecture is that you are extending your operational footprint. While this may seem obvious, it may have a real impact on IT operations and development organizations. Consider the following questions:

  • Which team is responsible for the components that are running in the public cloud?
  • Is your IT operations team prepared to manage another platform?
  • Can your current monitoring and operations tools be used with the public cloud provider?
  • How is this architecture going to impact calls to the service desk?
  • What is the plan for network interruption between public cloud and data center?

As you can see, hybrid architectures may require you to hire individuals with the right skills to operate on the public cloud platform selected. Likewise, they may require acquiring new monitoring tools if the current tools do not support the chosen cloud platform. Some organizations may choose a “NoOps” approach in which the development team is responsible for the application components that run on the public cloud. This removes the burden from IT operations, but it requires the business to define new operational processes.

The key takeaway here is that the decision to go hybrid may seem simple because of the nature of on-demand computing, but should not be taken lightly by the business.

Rule 2: Don't Share Your Network With YouTube and Facebook

Make sure you work with your network operations team to identify networking requirements surrounding your hybrid applications. Sometimes the only collaboration that occurs is that development requests that a port be opened to the public Internet so that the privately-hosted application can receive requests from public cloud applications.

If network operations is not informed about the need for the connections to and from your public cloud to have a guaranteed quality-of-service, it is possible that your application traffic will be sharing bandwidth with fellow employees watching cat videos on Facebook. Your network operations team needs to be an integral part of the team deploying your hybrid cloud application so that they can focus on appropriate connectivity and traffic shaping.

Rule 3: Avoid Reverse Data Gravity

Simply stated, “data gravity” is the theory that processes should migrate to the data given the relative mass of each. That is, processes are fairly lightweight to move when you consider that they operate on terabytes or petabytes of data. However, hybrid cloud architectures sometimes suffer from “reverse data gravity” because using a hybrid cloud means keeping (some) data on privately-hosted environments. That is, the processes hosted in the public cloud draw data from the private side. This is an architectural choice that limits risk, but at some point the business needs to identify what is an acceptable quantity of data that can be moved before reaching a point of diminishing returns. Further impacting reverse data gravity is the choice to pull data through services or directly connect to data sources on the private side. The latter will most likely result in higher quantities of data being transferred.
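
To make the trade-off concrete, here is a minimal sketch (in Python, with a hypothetical private-side endpoint) contrasting the two approaches: asking a private-side service for an already-computed aggregate versus dragging raw rows into the public cloud and aggregating them there.

```python
# Sketch: limiting reverse data gravity. The endpoint name is a hypothetical
# placeholder; the point is that only the aggregate crosses the WAN.
import json
import urllib.request

PRIVATE_API = "https://api.internal.example.com"  # hypothetical private-side service


def daily_totals_via_service(day: str) -> dict:
    """Preferred: the private side computes the aggregate; only a small
    JSON payload travels to the public cloud."""
    url = f"{PRIVATE_API}/orders/daily-totals?day={day}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


def daily_totals_via_direct_pull(rows: list) -> dict:
    """Anti-pattern: by the time this runs, every raw row has already been
    dragged across the WAN just to compute one number."""
    return {"total": sum(r["amount"] for r in rows), "count": len(rows)}
```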

Additionally, since you now have application components running on the public cloud, data is being generated on both public and private clouds. Specifically, logs are being generated by activities on the public cloud — which means new data to be managed and analyzed.

Rule 4: Trust Only Goes So Far

This is also known as the Captain Obvious rule. But the truth is that sometimes businesses make poor choices in favor of simplicity. For example, some businesses have actually stored, on the public cloud, private keys that enable connectivity back to the privately-hosted environment. This opens up significant risk of a breach of the private environment. If keys are needed, they should be stored encrypted in a third-party repository, pulled by the process when needed, and erased when no longer required. This separation of concerns significantly reduces the attack surface.
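
As an illustration, here is a minimal sketch, assuming a hypothetical HTTPS secrets endpoint and bearer token, of pulling a credential only when it is needed and dropping it immediately afterward rather than baking it into the public cloud environment.

```python
# Sketch: pull a connection credential at runtime from a secrets repository
# instead of storing it on the public cloud. The URL, token variable, and
# response shape are hypothetical placeholders.
import json
import os
import urllib.request

SECRETS_URL = "https://secrets.example.com/v1/hybrid-app/db-key"  # hypothetical


def fetch_secret() -> str:
    req = urllib.request.Request(
        SECRETS_URL,
        headers={"Authorization": f"Bearer {os.environ['SECRETS_TOKEN']}"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["value"]


def with_private_side_connection(work):
    """Fetch the key, use it, and drop the reference; never cache it or
    write it to disk on the public cloud."""
    key = fetch_secret()
    try:
        return work(key)  # e.g. open the tunnel or database session here
    finally:
        del key
```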

Here are some additional best practice recommendations:

  • Limit the number of endpoints that can establish secured connections with the data center
  • Limit access from Internet-enabled servers in the public cloud
  • Traffic leaving secured areas on the public cloud should go through a Network Address Translation (NAT) server

Note that the NAT server offers a higher degree of security by limiting access into private network areas on the public cloud. But this is also a traffic bottleneck and single point of failure, so high availability and load balancing solutions may be required to obtain the necessary performance.

Rule 5: Application Redesign May Offer Better Performance

If we consider the traditional three-tier web application as a good model for hybrid architectures, there is a natural inclination to have the web interface on the public side and the application server and database on the private side. However, this architecture may not offer the best economics or performance for the application.

Sometimes it is advantageous to redesign the application to take better advantage of public cloud services while still offering the benefits of hybrid cloud architecture. As depicted in Figure 1: moving left to right, we can see various parts of the application start to shift to the public cloud. The first transition illustrates splitting up the application server so that some services may be running local to the web server while others are running local to the database, which is a good trade-off if there is a division of compute functions and data management functions.

The far right example represents moving everything but the data onto the public side, but perhaps using caching techniques or NoSQL databases to store data temporarily so that compute functions can respond more effectively. These modifications have the effect of reducing latency, being more responsive to user interactions, and limiting the amount of data that needs to be transferred back to the private side.
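
The caching idea can be sketched in a few lines. The example below uses an illustrative in-process cache and a stand-in for the private-side lookup to show a read-through pattern; in practice a managed cache or NoSQL store on the public side would play the same role.

```python
# Sketch: a read-through cache on the public cloud side so repeated lookups
# do not cross back to the privately hosted database every time. The fetch
# function and TTL are illustrative placeholders.
import time

_CACHE: dict = {}
TTL_SECONDS = 300  # how long a private-side result may be reused locally


def fetch_from_private_db(customer_id: str) -> dict:
    """Stand-in for the expensive call back across the WAN."""
    return {"id": customer_id, "tier": "gold"}


def get_customer(customer_id: str) -> dict:
    now = time.monotonic()
    hit = _CACHE.get(customer_id)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]  # served locally, no round trip to the private side
    record = fetch_from_private_db(customer_id)
    _CACHE[customer_id] = (now, record)
    return record
```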


Rule 6: Don't Treat Public Cloud Like Another Data Center 

It’s human nature to revert to what we know when in uncharted or unfamiliar territory. For operations personnel, even if they have training on the cloud platform, they are going to be most familiar with how they’ve run things for the past few years of their careers. This often means overprovisioning, which in cloud speak means subscribing to more virtual machines and storage than you really need. It can also mean deploying and managing all the software components themselves rather than using services from the public cloud provider, such as managed relational database and queuing services.

The value of the public cloud is more than just availability of resources on demand. Public clouds also let you obtain these resources at a good economic value. Taking advantage of things like discount policies for reservations is a better alternative than acquiring everything on-demand. Using tools and services that provide high availability and failover options can be much more effective, and less costly, than building in these capabilities manually. Since these options don’t exist in the privately-hosted environment, operations is often forced into a bimodal situation requiring different approaches to managing resources. For many, the duality can be disconcerting and disorienting.
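
A quick back-of-the-envelope calculation helps operations reason about reservations versus on-demand. The sketch below uses placeholder rates, not any provider's actual prices; the point is simply the break-even arithmetic.

```python
# Sketch: break-even utilization for a reserved instance versus on-demand.
# The rates below are placeholders, not real provider prices.
HOURS_PER_MONTH = 730


def breakeven_utilization(on_demand_hourly: float, reserved_monthly: float) -> float:
    """Fraction of the month an instance must run before the reservation wins."""
    return reserved_monthly / (on_demand_hourly * HOURS_PER_MONTH)


if __name__ == "__main__":
    # Hypothetical rates: $0.10/hour on demand versus $45/month reserved.
    frac = breakeven_utilization(on_demand_hourly=0.10, reserved_monthly=45.0)
    print(f"Reservation pays off above {frac:.0%} utilization")  # roughly 62%
```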

Rule 7: Test, Test & Then Test Again

Testing your hybrid cloud architecture requires an understanding of common issues in building distributed applications. Networks have become so reliable that we sometimes take for granted that our packets will arrive at their destination. However, transmission errors still occur due to hardware failures or increased traffic. The key is to build the appropriate resiliency into your application in the face of these failures.
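
A minimal example of that resiliency, assuming a hypothetical private-side URL, is a bounded timeout combined with retries and exponential backoff:

```python
# Sketch: a bounded timeout plus retries with exponential backoff for calls
# that cross the public/private boundary. The URL and limits are illustrative.
import time
import urllib.error
import urllib.request


def call_private_service(url: str, attempts: int = 4, base_delay: float = 0.5) -> bytes:
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of retries; let the caller handle the failure
            time.sleep(base_delay * 2 ** attempt)  # back off: 0.5s, 1s, 2s, ...
```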

Application developers have spent years dealing with reliability issues within the data center, where they can rely on fiber-optic connections and high-speed connectivity. When we move to a hybrid architecture, we often are communicating over slower connections and sharing bandwidth with other applications. Hence, we need additional testing to identify how the application will respond.

Here’s a list of additional tests we recommend for hybrid applications; a sketch of a latency-injection test follows the list:

  • Network failure testing
  • Increased latency testing
  • VM server failure testing
  • Invalid message testing (for services)
  • Authorization testing
  • Authentication testing
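
Here is a sketch of a latency-injection test for the first two items. It spins up a deliberately slow local socket server and asserts that a client with a hypothetical two-second budget times out instead of hanging:

```python
# Sketch: a latency-injection test. A deliberately slow local server stands in
# for a degraded WAN link; the client must time out rather than hang.
import socket
import threading
import time
import unittest


def slow_server(port: int, delay: float) -> None:
    """Accept one connection, then stall before closing it."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    time.sleep(delay)
    conn.close()
    srv.close()


def fetch_with_budget(port: int, budget: float) -> bytes:
    """Hypothetical client call with a hard time budget."""
    with socket.create_connection(("127.0.0.1", port), timeout=budget) as s:
        return s.recv(1024)


class LatencyInjectionTest(unittest.TestCase):
    def test_client_times_out_instead_of_hanging(self):
        threading.Thread(target=slow_server, args=(8099, 5.0), daemon=True).start()
        time.sleep(0.2)  # let the slow server bind and listen
        with self.assertRaises(socket.timeout):
            fetch_with_budget(port=8099, budget=2.0)


if __name__ == "__main__":
    unittest.main()
```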

Conclusions

As I like to say: “Hybrid cloud architectures are easy ... until they’re not.” A lot of hidden complexities are involved in managing an enterprise-scale, robust, hybrid cloud deployment. Methodologies for building applications that operate in silos within a data center and those that cross geographic boundaries over common wide-area networks cannot be expected to behave identically, nor can they be operated in the same manner. Following the advice put forth in these seven rules will, I hope, help you deliver resilient hybrid applications.


