Cloud Application Design Pattern


Take a look at how this team took advantage of the cloud infrastructure to build an application designed for the cloud, and the steps they took.


On-demand scalability is the most powerful weapon the rise of cloud computing has given us. Yet most of the time when we talk about the cloud, we talk only about cloud infrastructure management, not about enabling applications for cloud adoption.

Based on my experience in making applications take advantage of cloud infrastructure and tools, I came up with a design pattern to help teams achieve it.

I am going to talk about my experience in making an application cloud-enabled and the design principles we used. We were working on an application whose initial version was ready for a pilot test in a real-world situation. Unfortunately, during the pilot run, the application crashed within minutes. We had followed the basic principles of application development: an SOA architecture, a modular design, three tiers, and background jobs wherever necessary. A retrospective showed that the real reason for the failure was that we could not scale.

Our biggest task was to make the application scalable enough to leverage the cloud infrastructure. We came up with an architecture pattern and turned it around quickly.

Below are the key components of the application design pattern in the sequence in which they need to be implemented.


Concurrency

Primarily, your application should support concurrency: its design should enable mutually exclusive tasks to be processed independently. This lets us make the best use of CPU time slicing, which means, first of all, using each CPU to its best possible capacity.

A process contains threads, and threads are lightweight processes.

We have to be very careful when deciding, in our application architecture, what should be a process: each process is allocated its own memory and resources for execution, whereas each thread within a process shares the memory and resources allocated to that process.
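To make the distinction concrete, here is a minimal, illustrative Java sketch of two threads sharing the memory of their parent process through a single counter (this is demo code, not part of our application):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedMemoryDemo {
    public static void main(String[] args) throws InterruptedException {
        // Both threads see this counter because they live in the same process.
        AtomicInteger counter = new AtomicInteger();

        Runnable work = () -> {
            for (int i = 0; i < 1_000; i++) counter.incrementAndGet();
        };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();   // wait for both threads to finish
        t2.join();

        System.out.println(counter.get());   // 2000
    }
}
```

Two separate processes would each get their own copy of the counter; two threads share one.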

In Java, the concurrency framework is there to help. It is primarily built around two concepts: processes and threads. The "java.util.concurrent" package offers improved support for concurrency compared to using Threads directly.

Interesting Fact: Google had to decide how to handle that separation of tasks. They chose to run each browser window in Chrome as a separate process rather than a thread or many threads, as is common with other browsers. Doing that brought Google a number of benefits. Running each window as a process protects the overall application from bugs and glitches in the rendering engine and restricts access from each rendering engine process to others and to the rest of the system. Isolating a JavaScript program in a process prevents it from running away with too much CPU time and memory and making the entire browser non-responsive.

What We Did

We created separate processes for "Crawler," "Data Processing," and "Best Match" for each URL, as these were completely independent pieces of the puzzle. This enabled us to control resource allocation individually. Within the "Data Processing" process, each URL was processed in its own thread.
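A minimal sketch of how per-URL worker threads might look with java.util.concurrent; the URL list and the processUrl body are illustrative placeholders, not our actual pipeline code:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DataProcessingWorker {

    // Hypothetical per-URL work; stands in for the real data-processing logic.
    static void processUrl(String url) {
        System.out.println(Thread.currentThread().getName() + " processing " + url);
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> urls = List.of("https://a.example", "https://b.example", "https://c.example");

        // A fixed pool bounds resource usage inside the "Data Processing" process.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String url : urls) {
            pool.submit(() -> processUrl(url));   // each URL handled on its own thread
        }

        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

The pool size is a tuning knob: it caps how many URLs are processed at once within the process.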


Concurrency and parallelism


Parallelism

Parallelism means the ability to break a task into smaller tasks to complete them faster. Divide and conquer is the classic implementation example (remember merge sort). Another excellent design example of parallel processing is MapReduce.

Our privilege of relying on ever-increasing CPU speed ended around 2008, when we maxed out the number of transistors we could put on a single CPU. Then the multi-core era started, and with it the need for parallelism.

An application can be parallel but not concurrent, meaning it processes multiple sub-tasks of a single task on a multi-core CPU at the same time.

As Rob Pike put it: concurrency is about dealing with lots of things at once; parallelism is about doing lots of things at once.

Back then, Java had a concurrency framework but, unfortunately, nothing for parallelism. Later we saw the rise of languages designed to manage this challenge, such as Go.

Java caught up as well: the JDK added the Fork/Join framework to the concurrency package in JDK 7, and pipelines and streams (reminiscent of MapReduce) in JDK 8.
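As an illustration of the Fork/Join style, here is a small divide-and-conquer sum implemented with RecursiveTask (a standalone example, not code from our application):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer sum over an array using the Fork/Join framework (JDK 7+).
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;   // below this size, compute directly
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                               // run the left half asynchronously
        return right.compute() + left.join();      // compute the right half, then join
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum);   // 50005000
    }
}
```

The work-stealing pool spreads the sub-tasks across cores, which is exactly the divide-and-conquer shape described above.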

"Aggregate Operations" in the Java Tutorials is a good read to learn more about parallelism in Java.
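For example, an aggregate operation such as a sum of squares can be parallelized across cores with a single .parallel() call on a stream:

```java
import java.util.stream.LongStream;

public class ParallelAggregate {
    public static void main(String[] args) {
        // Aggregate operation: sum of squares 1..100, executed in parallel.
        long sum = LongStream.rangeClosed(1, 100)
                             .parallel()          // split the range across cores
                             .map(n -> n * n)
                             .sum();
        System.out.println(sum);   // 338350
    }
}
```

The result is the same as the sequential version; only the execution strategy changes.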

Remove Contention

No matter how wide we make our highways, if we do not build flyovers to handle the inflow of traffic, the traffic as a whole stays slow. It works the same way for a software solution.

If your application has a non-scalable module that all, or most, of the application depends on, you have a contention point. These must be removed, or your other scalability improvements may lose their relevance to the overall solution. There is no silver bullet here: it all depends on the application design and the needs of the solution.

What We Did

We did a quick performance evaluation of the application and realized that even though we were processing in parallel, all the different pieces depended heavily on the database. This prompted three decisions. First, redesign the database and de-normalize tables around each module's specific needs rather than keeping a purely normalized DB. Second, make the persistence layer a scalable service so it could use load-balancing capabilities. And lastly, reduce DB IO by keeping an in-memory store for operations (like data aggregation) that could wait a while before being persisted.
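The third decision could be sketched like this: accumulate aggregation counts in memory and flush them to the database in batches. AggregationBuffer and its persist method are hypothetical names for illustration, not our production code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

// Sketch: aggregate events in memory, persist in batches, cutting per-event DB IO.
public class AggregationBuffer {
    final Map<String, LongAdder> counts = new ConcurrentHashMap<>();
    private final ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();

    public AggregationBuffer(long flushSeconds) {
        flusher.scheduleAtFixedRate(this::flush, flushSeconds, flushSeconds, TimeUnit.SECONDS);
    }

    public void record(String key) {
        counts.computeIfAbsent(key, k -> new LongAdder()).increment();
    }

    void flush() {
        counts.forEach((key, adder) -> {
            long n = adder.sumThenReset();
            if (n > 0) persist(key, n);   // one batched write instead of n writes
        });
    }

    // Placeholder for the real persistence call (e.g. a batched INSERT/UPDATE).
    void persist(String key, long count) {
        System.out.println("persist " + key + " += " + count);
    }
}
```

The trade-off is durability: anything recorded since the last flush is lost on a crash, which is why this only suits data that "could wait a while before being persisted."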


Modularity

Most of us know what this means by now. The more modular we make our application design and the smaller we make our services, the better our chances of using the cloud infrastructure to scale the application at run time.

What We Did

As you have seen, each previous step contributed to making the services modular. The more formalized version of this in today's world is the microservices architecture. We used the "Decompose by subdomain" pattern to redesign some pieces of the application.

Hardware Scaling

The last step is to leverage hardware scaling, which the cloud lets us perform extensively. The first move is vertical scaling: we add a higher-capacity CPU, and with concurrency implemented, we are ready to use it to our advantage. The second move is further vertical scaling by adding multi-core processing power; with parallelism implemented, we are ready to use the added horsepower.

The next step is horizontal scaling. We add load-balanced machines based on the firepower needed by individual components of the application, like the DB, persistence, and analytics. Since our application is modular and micro, we are all set to take advantage.

Steps to cloud enablement



In my experience, if you really want to leverage the cloud for limitless scale, follow these steps in order to refactor your application, and the world is yours.


