
Is the Iterative Approach Ideal for Big Data Systems?


Big Data is really hot, but is the iterative approach best for Big Data systems? Here's a look at the benefits of iterative approaches, like flexibility, scalability, and performance.


Big data systems are complex, to say the least, due to the sheer volume, velocity, and variety of data that must be processed to extract the right business insights. Further, storing this data brings additional complications, such as security and governance.

To handle all these dimensions, a sound software architecture is key. Though there are many choices, an architecture built around a highly iterative approach offers programmers many advantages over traditional approaches like the waterfall model.

Below are some reasons why an iterative approach is well-suited for big data systems.

Flexibility

The iterative approach is an integral part of agile methodologies, and its biggest advantage is that it gives programmers the flexibility to adapt to changing requirements and makes it convenient for the entire team to validate requirements and design at every stage of development.

Such flexibility ensures that programmers can make modifications efficiently and cost-effectively, before the code becomes too complex to change. Take the online catalogue Solicitors.guru, for instance. It is a large big data system, and in building and maintaining it, the iterative approach lets programmers address requirements in small batches and deliver functional increments, typically at the end of every cycle.

The obvious advantage of this approach is that programmers only have to handle a small part of the system's complexity at a time, and they can revisit the problem during every iteration to make the necessary changes.
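To make the batch-by-batch idea concrete, here is a minimal Python sketch of working through a large dataset one small slice per iteration, with a checkpoint at the end of each cycle. The function names and the doubling "processing" step are hypothetical stand-ins, not any particular framework's API:

```python
# A minimal sketch of tackling a large dataset in small, revisitable
# batches rather than in one monolithic pass. All names are illustrative.
from statistics import mean

def process_batch(batch):
    """Stand-in for one iteration's worth of real processing."""
    return [value * 2 for value in batch]

def batches(data, size):
    """Yield the data in fixed-size slices, one per iteration."""
    for start in range(0, len(data), size):
        yield data[start:start + size]

data = list(range(100))  # stand-in for a much larger dataset
for iteration, batch in enumerate(batches(data, size=25), start=1):
    results = process_batch(batch)
    # End-of-iteration checkpoint: inspect the results before moving on,
    # mirroring the per-cycle review described above.
    print(f"iteration {iteration}: processed {len(results)} records, "
          f"mean output {mean(results):.1f}")
```

Each pass through the loop is an opportunity to adjust the batch size, the processing logic, or the requirements themselves before the next cycle begins.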

Answering the Unknown

Unlike conventional systems, big data systems do not start with a clear problem statement, because the problem is too broad and the scope is unknown. A good example is measuring consumer sentiment and the factors driving it, so that businesses can address those sentiments to improve marketability.

The scope of, and answer to, this problem are unknown, and the business has to work with programmers to identify the best approach to finding the right answers. When programmers face such open-ended problems, they are better off tackling them in small batches. In iterative modeling, the first step is called Iteration Zero, and this is where the foundations of the project are laid.
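As a rough illustration, an Iteration Zero prototype for the sentiment problem might be nothing more than a naive keyword scorer like the sketch below. The word lists and scoring rule are placeholder assumptions, there only to frame the problem so that later iterations can replace them with something better:

```python
# A deliberately naive "Iteration Zero" sentiment scorer. The word lists
# and the +1/-1 rule are hypothetical placeholders, not a real model.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"poor", "hate", "terrible", "angry"}

def sentiment_score(text):
    """Score text: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reviews = [
    "I love this product, excellent support",
    "terrible experience, poor packaging",
]
for review in reviews:
    print(sentiment_score(review), review)
```

The point is not the scorer itself but the baseline it establishes: once it exists, each subsequent iteration has something concrete to measure against and improve.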

The solution then evolves from one iteration to the next, based on changing business priorities and the outcomes of previous iterations. In fact, priorities are redefined at the beginning of every iteration cycle, and progress is evaluated at the end of it, to ensure that the team is on the right track.

Such an approach works well for big data systems because the requirements can change with every iteration as insights from earlier ones accumulate. In my experience as an Android and iOS app developer, this holds true.

Scalability and Performance

One of the biggest challenges programmers face is building scalable, high-performance big data systems that can process vast amounts of data at astronomical speeds. According to Ben Lee, co-founder of Neon Roots and creator of the product development workshop Rootstrap, “Most big data systems are implemented on cloud platforms because it enables team collaboration when doing deployments. Creating and making changes to such a complex system needs to be iterative to get the best possible solutions.”
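The underlying scale-out pattern can be sketched in a few lines: split the data into chunks, process the chunks in parallel, and combine the partial results. The sketch below uses only Python's standard library rather than any particular cloud framework, and the function names are illustrative assumptions:

```python
# A minimal sketch of the scale-out pattern: chunk the data, process
# chunks in parallel, combine partial results. Names are illustrative.
from concurrent.futures import ProcessPoolExecutor

def summarise_chunk(chunk):
    """Stand-in for heavy per-chunk work (parsing, aggregation, etc.)."""
    return sum(chunk)

def chunks(data, size):
    for start in range(0, len(data), size):
        yield data[start:start + size]

if __name__ == "__main__":
    data = list(range(1_000_000))
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(summarise_chunk, chunks(data, 100_000)))
    print(sum(partials))  # combine partial results, map-reduce style
```

On a cloud platform, the same split-process-combine structure is simply distributed across machines instead of local processes, which is why it lends itself to the incremental changes an iterative approach demands.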

To conclude, big data systems are best driven by an iterative approach spread over time, as it gives businesses and programmers the chance to re-evaluate their approach and requirements periodically.

Since big data solutions start with a clean slate, it is imperative to approach the problem in small steps that are revisited before the next step is taken. Moreover, programmers get considerable flexibility over deliverables, which helps them stay on top of business and requirement changes throughout the project. For these reasons, the iterative approach is here to stay as we develop complex big data systems to uncover hitherto unknown business intelligence.



