How to Overcome Big Data Analytics Limitations With Hadoop


Are you considering Hadoop? Read on to find out how it may be able to help you overcome your big data analytics challenges.


Hadoop is an open source project maintained by the Apache Software Foundation; it reached its 1.0 release in 2011 after earlier versions had worked through a variety of bugs. Hadoop is a great tool for big data analytics because it is highly scalable, flexible, and cost-effective.

However, there are also some challenges that big data analytics professionals need to be aware of. The good news is that newer SQL tools are available that can overcome them.

What Are the Benefits of Hadoop for Big Data Storage and Predictive Analytics?

Hadoop is a very scalable system that allows you to store multi-terabyte files across multiple servers. Here are some benefits of this big data storage and analytics platform.

Low Failure Rate

Data is replicated across multiple machines, which makes Hadoop a great option for backing up large files. Every time a dataset is written to a node, HDFS replicates it on other nodes in the same cluster (three copies by default). Since it is backed up across several nodes, there is only a very small probability that the data will be permanently altered or destroyed.
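To see why replication makes permanent loss so unlikely, here is a back-of-envelope sketch: a block is lost only if every one of its replicas fails. The per-replica failure rate below is a made-up assumption for illustration, not an HDFS figure.

```python
# Illustrative only: rough probability that a block is lost when each of
# its replicas fails independently. HDFS keeps 3 replicas by default;
# the per-replica failure rate here is a hypothetical assumption.
def prob_block_loss(per_replica_failure: float, replicas: int = 3) -> float:
    """A block is lost only if every replica fails."""
    return per_replica_failure ** replicas

# With a hypothetical 1% chance of losing any single replica:
print(prob_block_loss(0.01))      # roughly one block in a million is lost
print(prob_block_loss(0.01, 1))   # without replication: one in a hundred
```

Under the (simplistic) independence assumption, three-way replication turns a one-in-a-hundred failure into a one-in-a-million one.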


Cost-Effective

Hadoop is one of the most cost-effective big data analytics and storage solutions. According to research from Cloudera, it can store data for a fraction of the cost of other big data storage methods.

"If you look at network storage, it's not unreasonable to think of a number on the order of about $5,000 per terabyte," said Zedlewski, Charles Zedlewski, VP of product at Cloudera. "Sometimes it goes much higher than that. If you look at databases, data marts, data warehouses, and the hardware that supports them, it's not uncommon to talk about numbers more like $10,000 or $15,000 a terabyte."


Flexible

Hadoop is a very flexible solution. You can easily ingest and extract both structured and unstructured data sets with SQL.

This is particularly valuable in the healthcare industry, because healthcare providers need to constantly update patient records. According to a report from Dezyre, IT firms that support healthcare providers are already using Hadoop for genomics, cancer treatment, and monitoring patient vitals.
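The idea of running plain SQL over semi-structured records is easy to demonstrate in miniature. The sketch below uses an in-memory SQLite database as a stand-in for a SQL-on-Hadoop engine (Hive or Impala would do the same thing at cluster scale); the patient records and fields are invented examples.

```python
import sqlite3

# Stand-in for SQL-on-Hadoop: load semi-structured records (e.g. parsed
# from JSON) into a table and query them with plain SQL. The records and
# the 100-bpm threshold are invented for illustration.
records = [
    {"patient_id": 1, "vital": "heart_rate", "value": 72},
    {"patient_id": 1, "vital": "heart_rate", "value": 110},
    {"patient_id": 2, "vital": "heart_rate", "value": 65},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vitals (patient_id INTEGER, vital TEXT, value REAL)")
conn.executemany(
    "INSERT INTO vitals VALUES (:patient_id, :vital, :value)", records
)

# Flag patients with any elevated heart-rate reading.
rows = conn.execute(
    "SELECT patient_id, MAX(value) FROM vitals "
    "WHERE vital = 'heart_rate' GROUP BY patient_id HAVING MAX(value) > 100"
).fetchall()
print(rows)  # [(1, 110.0)]
```

The point is the workflow, not the engine: analysts keep writing ordinary SQL while the platform underneath handles the raw, loosely structured data.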


Scalable

Hadoop is highly scalable: it can store many terabytes of data, and a single cluster can run thousands of data nodes simultaneously.

Challenges Utilizing SQL for Hadoop and Big Data Analytics

Hadoop is very versatile because it is compatible with SQL. You can use a variety of SQL methods to extract and analyze big data stored in Hadoop. If you are proficient with SQL, Hadoop is probably the best big data analytics solution you can use.

However, you will probably need a sophisticated SQL engine to extract data from Hadoop. A few open-source solutions were released over the past year.

Apache Hive was the first SQL engine for extracting data sets from Hadoop. It had three primary functions:

  • Running data queries
  • Summarizing data
  • Big data analytics

Hive automatically translates SQL-like queries into Hadoop MapReduce jobs, which overcame many of the challenges big data analytics professionals faced when trying to write such jobs by hand. Unfortunately, the Apache Hive wiki admits that queries usually incur a time delay, which grows with the size of the data being queried.

“Hive is not designed for OLTP workloads and does not offer real-time queries or row-level updates. It is best used for batch jobs over large sets of append-only data (like web logs)."

The time delay is more noticeable with large data sets, which makes Hive less suitable for projects that require data to be analyzed in real time.
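Conceptually, the MapReduce jobs that Hive generates boil down to three steps: a map phase that emits key/value pairs, a shuffle that groups pairs by key, and a reduce phase that aggregates each group. A minimal single-process sketch of that model (real jobs run distributed across the cluster):

```python
from collections import defaultdict

# Minimal in-process sketch of the MapReduce model that Hive compiles
# SQL queries into: map emits key/value pairs, shuffle groups them by
# key, and reduce aggregates each group.
def map_phase(line):
    for word in line.split():
        yield word, 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data analytics", "big data storage"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'analytics': 1, 'storage': 1}
```

The batch-oriented shuffle step, which must gather all intermediate pairs before reducing, is a large part of why Hive's latency grows with data size.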

A number of new solutions have been developed over the last year. These SQL engines are more appropriate for scalable projects. These solutions include:

  • CitusDB
  • DataTorrent
  • Concurrent Lingual
  • Spark
  • Hadapt
  • InfiniDB
  • Storm
  • JethroData
  • MammothDB
  • MemSQL
  • Pivotal HawQ
  • Apache Drill
  • Cloudera Impala
  • Progress DataDirect
  • ScleraDB
  • Simba
  • Splice Machine

Rick van der Lans reports that many of these solutions have valuable features that Apache Hive lacks. One of these features is polyglot persistence, which means that they can store data in their own databases as well as access the data stored on Hadoop. A number of these applications can also be used for real-time big data analytics. InfoWorld reports that Spark, Storm, and DataTorrent are the three leading solutions for real-time big data analytics on Hadoop.

“Real-time processing of streaming data in Hadoop typically comes down to choosing between two projects: Storm or Spark. But a third contender, which has been open-sourced from a formerly commercial-only offering, is about to enter the race, and like those components, it may have a future outside of Hadoop.”
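The batch-versus-streaming distinction these projects address can be illustrated in a few lines: instead of waiting for a complete data set, a stream processor updates its answer as each event arrives. Storm and Spark Streaming apply this idea across a cluster; the single-process sketch below, with invented sensor values, is only an analogy.

```python
from collections import deque

# Toy illustration of stream processing: emit an updated rolling average
# as each event arrives, rather than computing once over a finished batch.
# The events and window size are invented examples.
def rolling_average(events, window_size=3):
    window = deque(maxlen=window_size)
    for value in events:
        window.append(value)
        yield sum(window) / len(window)

stream = [10, 20, 30, 40]
print(list(rolling_average(stream)))  # [10.0, 15.0, 20.0, 30.0]
```

Each incoming value produces a fresh result immediately, which is the property batch-oriented Hive queries cannot offer.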

John Bertero, vice president at MapR, states that Hadoop is also shaping the gaming industry, which has become very dependent on big data. Bertero states that gaming companies will need to use Hadoop to extract large quantities of data to meet the ever-growing expectations of their customers: "The increase in video game sales also means a dramatic surge in the amount of data that is generated from these games."

If you are using Hadoop for big data analytics, it is important to choose one of the more advanced SQL engines.

