
The Basics of Building Cloud Applications


An overview of what to consider when building a cloud-native app, as well as the benefits of building with the cloud in mind.


Designing and developing applications for the Cloud requires a different approach than traditional, “real-machine” systems, and that change in approach calls for a discussion of the design options available. This is not to say that a traditional development approach will not work in a Cloud environment, but to fully leverage the Cloud as a platform (rather than as an infrastructure), each application must consider Cloud-specific architecture to get the full benefit. While there are many factors to think about, there is also a certain peace of mind that comes from having fewer IT-related tasks to deal with.

There are many benefits to applications made specifically for the Cloud; one of them is how your application scales in response to user load (increasing or decreasing). With traditional hardware-based applications, IT staff would add more power to existing servers in response to an increase in active users: identify the limiting factor of each machine, purchase additional or more powerful hardware, wait until a time when the server could be down, and then install the equipment. This model is typically slow and expensive and can take weeks of planning for even a simple server upgrade. Moreover, this approach doesn’t allow for scaling down, which means the servers must be sized for the maximum predicted spike at all times, so the same operating costs occur on the slowest day of the year as on the busiest.

Luckily, it is possible to avoid all of that and “virtualize” existing servers in the Cloud. This is referred to as Infrastructure as a Service (IaaS). This approach is recommended when scaling up and down is required but the existing application does not support horizontal scalability (adding more servers rather than making your existing server larger). As a result, this approach is very common when deciding to move to the Cloud, but IaaS does not provide all the benefits the Cloud has to offer. Applications using IaaS typically require the same effort from network/server administrators and are treated like physical machines by most IT staff. IaaS applications do, however, benefit from easier and quicker modification of server power.

However, when an application is designed specifically for the Cloud, it can run without much intervention or maintenance. This is referred to as Platform as a Service (PaaS) and is where the Cloud really starts to shine. With a well-configured Cloud application, scaling is automatic and appropriately sized for the number of active users. Additionally, instead of scaling the entire application, sections of the application scale independently, providing more processing power only where needed. Another great benefit of PaaS is the peace of mind you get because the Cloud eliminates periodic IT maintenance. This reduces the overall cost of supporting the application servers while providing a reliable solution that will handle the needs of its users, even if your application becomes popular overnight.

Once you decide to use the Cloud as the platform, there are some core concepts that provide a good start for developing scalable Cloud-centric applications.

Partitioning

Partitioning in the Cloud typically refers to separating sections of the application into clusters spread across different servers. Partitioning is most commonly applied to data storage, but it can also apply to other pieces such as background processing servers. This helps your application run faster because no single server gets bogged down handling all of the data.
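To make the idea concrete, here is a minimal Python sketch of hash-based partitioning for data storage. The shard hostnames are hypothetical; in a real deployment each entry would point at a separate database server or managed Cloud database instance.

```python
import hashlib

# Hypothetical shard addresses; each would be a separate database server.
SHARDS = [
    "db-shard-0.example.internal",
    "db-shard-1.example.internal",
    "db-shard-2.example.internal",
]

def shard_for(key: str) -> str:
    """Pick a shard by hashing the partition key (e.g., a user ID).

    Hashing spreads records evenly, so no single server stores, or is
    queried for, all of the data.
    """
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user-42"))    # e.g., db-shard-1.example.internal
print(shard_for("user-1337"))  # the same key always maps to the same shard
```

Because the mapping depends only on the key, any application server can locate a record without coordinating with the others.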

Separation of Functionality

Cloud systems have the ability to scale sub-sections of the application, as long as those sub-sections are appropriately defined. For example, an application that sends emails to its users will benefit from having the email engine be a separate entity from the website used by administrators. This allows the email system to scale up only when emails are being sent, while the administration website, which is irrelevant to that workload, stays at its normal size.
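As an illustration, here is a minimal Python sketch of that separation, assuming a reachable Redis instance and the redis-py client as the message queue; the queue name `email-jobs` and the `send_email` helper are hypothetical. The admin website only enqueues work, while a separate worker process, deployed and scaled on its own, actually sends the emails.

```python
import json
import redis  # assumes the redis-py client and a reachable Redis instance

queue = redis.Redis(host="localhost", port=6379)

# --- Admin website process: only enqueues work, never sends email itself ---
def request_email(to: str, subject: str, body: str) -> None:
    job = {"to": to, "subject": subject, "body": body}
    queue.lpush("email-jobs", json.dumps(job))  # returns immediately

# --- Email worker process: runs on its own servers and scales independently ---
def run_email_worker() -> None:
    while True:
        _, raw = queue.brpop("email-jobs")  # blocks until a job arrives
        send_email(json.loads(raw))

def send_email(job: dict) -> None:
    # Hypothetical stand-in for a call to an SMTP server or email API.
    print(f"sending '{job['subject']}' to {job['to']}")
```

The queue is the only contract between the two sub-sections, so the worker fleet can grow during a large mail-out without touching the administration website.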

Asynchronous Threading

Asynchronous threading refers to a set of techniques intended to deliver faster load times, better scalability, and significantly cheaper hosting costs. Two practices stand out:

  • Offload actions to background servers to keep the workload on edge servers (web/API servers) streamlined. This also helps separate functions into different sub-sections for independent scaling.
  • Use asynchronous threading when accessing the network or data storage. Performing network calls with an asynchronous threading model allows for better processor utilization, which directly translates to more active users per virtualized server; network access costs are significantly higher in Cloud environments than on locally optimized networks (see the sketch after this list).
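For example, here is a minimal Python sketch using asyncio; `asyncio.sleep` stands in for a real network or data-storage call so the example stays self-contained. Four half-second calls complete in roughly half a second instead of two, because the processor is not tied up waiting on each call in turn.

```python
import asyncio
import time

async def fetch_user_record(user_id: int) -> dict:
    # Stand-in for a network or storage call; a real service would await
    # an async HTTP or database client here instead.
    await asyncio.sleep(0.5)
    return {"id": user_id}

async def handle_request(user_ids: list[int]) -> list[dict]:
    # All calls wait on I/O concurrently, leaving the processor free to
    # serve other requests instead of idling on each call in turn.
    return await asyncio.gather(*(fetch_user_record(u) for u in user_ids))

start = time.perf_counter()
results = asyncio.run(handle_request([1, 2, 3, 4]))
print(f"{len(results)} records in {time.perf_counter() - start:.2f}s")  # ~0.5s, not ~2s
```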

Stateless Execution

Stateless execution ensures that no single server holds a user’s state, because if that server is replaced or fails, you run the risk of the user losing all of their data. Essentially, it means not putting all of your eggs in one basket. Stateless execution is a major component of optimized load balancing. An application that doesn’t support stateless execution will have issues when scaling up/down or during automated server maintenance. Special care must be taken to ensure that these issues are avoided:

  • Do not store information on local hard drives
  • Ensure user sessions operate across servers
  • Ensure the caching plan operates across servers (this usually implies centralized caching, such as a Redis cache; see the sketch after this list)
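As a sketch of the centralized approach, the following Python example stores session data in a shared Redis cache rather than in process memory or on a local disk, so any server can serve the next request. It assumes a reachable Redis instance and the redis-py client; the hostname is hypothetical.

```python
import json
import redis  # assumes a shared Redis instance reachable by every app server

# Hypothetical hostname for the centralized cache.
cache = redis.Redis(host="session-cache.example.internal", port=6379)

def save_session(session_id: str, data: dict) -> None:
    # Stored centrally with a one-hour expiry, so any server (including a
    # freshly added replacement) can pick up the same session later.
    cache.setex(f"session:{session_id}", 3600, json.dumps(data))

def load_session(session_id: str) -> dict:
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}
```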

These are just a few of the elements to keep in mind when developing an application for the Cloud. Designing and developing for the Cloud, as we continue to see throughout the industry, requires a different and more creative approach, and these elements serve as a practical starting point.



