
Moving to Parallel Programming-Based Development


Learn how a request is processed in parallel programming and how to approach programming tasks.


In most development work, one method calls another, which calls another, and so on. This creates a long chain of work.

Normal Flow: How a Request Is Processed

Consider a case where someone submits a request for processing. One way is to do all the processing in a single thread:

Process the input data into a valid DTO --> use the DTO to retrieve some data from the database --> retrieve data from a cache --> do further processing to compute the results --> post-process to map the results to the output required by the user --> publish the data / send back the response.
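As an illustration, here is a minimal sketch of that single-threaded flow in Java. The class, method names, and String payloads are hypothetical stand-ins for the real steps, not code from a specific framework.

// A hypothetical single-threaded handler: every step runs on the request thread,
// so total latency is the sum of all steps, including time spent waiting on I/O.
public class SequentialHandler {

    public String handleRequest(String rawInput) {
        String dto = validate(rawInput);           // validate and map the input to a DTO
        String dbData = fetchFromDatabase(dto);    // blocking database call
        String cached = fetchFromCache(dto);       // blocking cache lookup
        String result = compute(dbData, cached);   // CPU-bound processing
        return mapToOutput(result);                // map to the caller's format and respond
    }

    private String validate(String input)           { return input.trim(); }
    private String fetchFromDatabase(String dto)    { return "db:" + dto; }      // stand-in
    private String fetchFromCache(String dto)       { return "cache:" + dto; }   // stand-in
    private String compute(String db, String cache) { return db + "|" + cache; }
    private String mapToOutput(String result)       { return "{\"result\":\"" + result + "\"}"; }
}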

The Issue With This Flow

Now, in this process, if one of the tasks takes too much time, every thread working through this chain is delayed by that much. At best, we can say that different threads will work on different requests, which helps throughput. But if we look closer, each such thread still spends a lot of its time waiting.

Alternate Approach (Using In-Memory Queues)

Think of dividing this task chain into event-based parallel processing.

In the above example, we can have the following flow (a minimal sketch follows the list):

  1. A request comes in --> it is validated and added to a queue for processing --> the caller receives a unique ID in response.

  2. The listener on that queue transforms the input data into the relevant DTO (with any modifications) and adds it to another queue for further processing.

  3. The listener on that queue retrieves the required data from the database and adds it to the next queue for further processing.

  4. The listener on that queue retrieves data from the cache, processes it further to produce the results, and adds the results to the next queue.

  5. The listener on that queue processes the input and persists it to the database.

  6. The caller calls another API to retrieve the data.
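Here is a rough sketch of this flow using java.util.concurrent.BlockingQueue. The class, the String payloads, and the stage logic are hypothetical simplifications, and only the first two stages are shown.

import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A hypothetical in-memory pipeline. Each stage has its own queue and listener thread,
// so a slow stage only delays its own consumers instead of holding the request thread.
public class RequestPipeline {

    private final BlockingQueue<String> validatedQueue = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> dtoQueue = new LinkedBlockingQueue<>();

    // Step 1: validate, enqueue, and immediately return a unique ID to the caller.
    public String submit(String rawInput) {
        if (rawInput == null || rawInput.isEmpty()) {
            throw new IllegalArgumentException("invalid request");
        }
        validatedQueue.add(rawInput);
        return UUID.randomUUID().toString();
    }

    // Step 2: the listener on the first queue transforms the input (a stand-in for
    // DTO mapping) and hands it to the next queue for further processing.
    public void startDtoListener() {
        Thread listener = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String input = validatedQueue.take();   // blocks until work arrives
                    dtoQueue.add(input.toUpperCase());      // stand-in for the real mapping
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        listener.setDaemon(true);
        listener.start();
    }
    // ...the database, cache, processing, and persistence stages follow the same pattern.
}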

Benefits of This Approach

In the above example, we have five different stages where we can throttle or tune the work. At any stage, we can increase the number of consumers on a queue, depending on the time that stage takes.

This way, where many threads previously had to stay in a working state for a long duration, we now need only a few threads for each targeted area.
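For example, a sketch like the following (the class and method names are hypothetical) makes the consumer count per queue the tuning knob, so a slow stage such as the database fetch can simply be given more consumers than a fast one.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical throttling helper: start N consumers for a given stage's queue.
public class StageConsumers {

    public static ExecutorService startConsumers(BlockingQueue<String> queue, int consumerCount) {
        ExecutorService pool = Executors.newFixedThreadPool(consumerCount);
        for (int i = 0; i < consumerCount; i++) {
            pool.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String item = queue.take();   // each consumer pulls work independently
                        process(item);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        return pool;
    }

    private static void process(String item) {
        // stand-in for the real stage logic
    }
}

A slow stage might then run with startConsumers(dbQueue, 8) while a fast mapping stage runs with startConsumers(mappingQueue, 2).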

Further Improvements

We can also add an output to two or more queues at the same time, so that it is processed in parallel downstream.

We can also notify the caller when processing completes instead of exposing another API for polling.
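A minimal sketch of these two improvements, with hypothetical names, might look like this: a fan-out helper that offers the same result to two queues, and a callback interface the final stage can invoke instead of the caller polling a second API.

import java.util.concurrent.BlockingQueue;

// Hypothetical helpers for the two improvements above.
public class PipelineExtras {

    // Fan-out: the same result goes to two queues so the downstream stages run in parallel.
    public static <T> void fanOut(T result, BlockingQueue<T> first, BlockingQueue<T> second) {
        first.add(result);
        second.add(result);
    }

    // Callback-style notification: the final stage pushes the result back to the caller
    // (for example over a webhook or WebSocket) rather than the caller polling an API.
    public interface CompletionListener<T> {
        void onComplete(String requestId, T result);
    }
}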

Another Approach (Taking Each Task as an Individual Task)

We can treat each step (which was processing data and putting it into another queue) as a task and submit each task to an executor, which will process these requests.

We can also have two kinds of pools: computing and I/O.

A compute pool should ideally run fewer threads, roughly the number of CPU cores, while the number of threads doing I/O can be higher, say, twice the number of cores.
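A minimal sketch of that split, following the rule of thumb above (the class name and methods are hypothetical):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical split between a compute pool and an I/O pool.
public class TaskPools {

    private static final int CORES = Runtime.getRuntime().availableProcessors();

    private final ExecutorService computePool = Executors.newFixedThreadPool(CORES);
    private final ExecutorService ioPool      = Executors.newFixedThreadPool(CORES * 2);

    public void submitComputeTask(Runnable task) {
        computePool.submit(task);   // CPU-bound work: keep threads close to the core count
    }

    public void submitIoTask(Runnable task) {
        ioPool.submit(task);        // blocking I/O: more threads can tolerate the waiting
    }
}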

Benefits

In this way, a complex problem statement is divided into many small subtasks, many of which can run in parallel, like persisting to the database and to some NoSQL database, or running multiple calculations on the same data in parallel.

The last task can notify the caller of the whole task chain.
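One way to sketch this, assuming CompletableFuture and hypothetical stage methods, is to run the independent subtasks on the I/O pool and notify the caller once all of them complete:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical parallel subtasks: the SQL write, the NoSQL write, and a calculation
// run in parallel, and the final step notifies the caller of the whole chain.
public class ParallelSubtasks {

    private final ExecutorService ioPool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors() * 2);

    public void process(String result, Runnable notifyCaller) {
        CompletableFuture<Void> sqlWrite   = CompletableFuture.runAsync(() -> saveToSql(result), ioPool);
        CompletableFuture<Void> noSqlWrite = CompletableFuture.runAsync(() -> saveToNoSql(result), ioPool);
        CompletableFuture<Void> calc       = CompletableFuture.runAsync(() -> calculate(result), ioPool);

        // The last task in the chain notifies the caller once everything is done.
        CompletableFuture.allOf(sqlWrite, noSqlWrite, calc).thenRun(notifyCaller);
    }

    private void saveToSql(String result)   { /* stand-in for the SQL persistence step */ }
    private void saveToNoSql(String result) { /* stand-in for the NoSQL persistence step */ }
    private void calculate(String result)   { /* stand-in for a parallel calculation */ }
}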

How to Go With These Approaches

There are many ways to implement the above approaches. You can also try using this set of JARs to move to this paradigm easily.

For example, to add to a queue, you can annotate your method:

// The return value of a method annotated with @Queued is added to the named queue.
@Queued(name = "myQueue")
public ObjectToBeQueued processDataToString(ComplexObject complexObject) {
  IntermediateObject obj = processor.process(complexObject);
  return createDataToBeQueued(obj);
}

// A method annotated with @Consumer receives items from the named queue.
@Consumer(name = "myQueue")
public void processItem(ObjectToBeQueued obj) {
  // processing logic
}


