6 Reasons to Use Apache Hadoop, Spark, and Hive in the Cloud
Cloud computing enables new levels of business agility for IT, developers, and data scientists. Performing data processing on cloud platforms can be especially cost-effective for short-lived analytic workloads.
We at Hortonworks have spent countless hours working with customers as they use Apache Hadoop, Spark, and Hive in the cloud, helping them better leverage the cloud platforms they run these data processing workloads on. In the interest of community and sharing, we wanted to share some of the top reasons we’ve heard. Enjoy!
Cloud computing enables new levels of business agility for IT, developers, and data scientists while providing a pay-as-you-go model with unlimited scale and no upfront hardware costs. Performing data processing on cloud platforms (for example, Microsoft Azure and Amazon Web Services) is especially effective for ephemeral use cases where you want to spin up analytic jobs, get the results, and then bring down the cluster so you can manage your costs.
1. Scale to Your Needs
Historically, as analytics demand grew within enterprises, a need to expand the capacity of Hadoop clusters emerged. Procuring new hardware could take days or weeks, limiting innovation and growth. Using Apache Hadoop, Spark, and Hive in the cloud lets you grow data processing power in real time: flip a switch, and new machines come online and get to work. This creates an overall data processing solution that scales rapidly to meet growing business needs.
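To make the “flip a switch” point concrete, here is a minimal sketch of growing a running cluster through a cloud API, using Amazon EMR and the boto3 SDK as one possible example. The region, cluster ID, and target node count are placeholders for illustration, not values from this article.

```python
# A minimal sketch of "flip a switch" scaling, assuming an Amazon EMR cluster
# managed through the boto3 SDK. The cluster ID and target size are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

CLUSTER_ID = "j-XXXXXXXXXXXXX"  # placeholder: the ID of your running cluster

# Find the CORE (worker) instance group of the cluster.
groups = emr.list_instance_groups(ClusterId=CLUSTER_ID)["InstanceGroups"]
core_group = next(g for g in groups if g["InstanceGroupType"] == "CORE")

# Grow the worker group to 10 nodes; the new machines come online
# without any hardware procurement cycle.
emr.modify_instance_groups(
    ClusterId=CLUSTER_ID,
    InstanceGroups=[
        {"InstanceGroupId": core_group["Id"], "InstanceCount": 10}
    ],
)
```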
2. Reduce Your Costs for Innovation
For companies getting started with Big Data analytics and processing in the cloud (using Apache Hadoop, Apache Spark, and Apache Hive), the low upfront investment makes perfect sense: there is no extensive administration to take on and no hardware to buy. Businesses are already realizing the value of the cloud for quick, one-time use cases involving Big Data computation, allowing them to improve business agility and gain insight with nearly instant access to hardware and data processing resources.
3. Pay for What You Need
Undoubtedly, the cloud is beneficial for running ephemeral use cases where you want to spin up a job, get the results and shut things down (and stop the spending meter). Since the cloud is flexible and scales fast, you pay only for the compute and storage you use, when you use it.
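As an illustration of this ephemeral pattern, the sketch below creates a cluster that runs a single Spark step and then terminates itself, again using Amazon EMR via boto3. Every name, path, release label, and instance size shown is an assumption for the example, not a recommendation.

```python
# A minimal sketch of an ephemeral, pay-per-use cluster on Amazon EMR via boto3.
# All names, S3 paths, and instance sizes are illustrative assumptions.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="ephemeral-spark-job",                      # hypothetical job name
    ReleaseLabel="emr-6.15.0",                       # assumed EMR release
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        # The cluster shuts itself down once its steps finish,
        # so the spending meter stops with it.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "nightly-aggregation",               # hypothetical step name
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/aggregate.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Started cluster:", response["JobFlowId"])
```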
4. Use the Right Machine for the Right Job
Some data processing jobs require more compute, some require more memory, and others require a lot of I/O bandwidth. With the cloud, you can easily use the right machine for the right job: cloud solutions let end users provision clusters with different types of machines for different types of workloads. In short, the cloud offers a flexible answer to the problem of managing variable resource requirements.
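A minimal sketch of that idea, assuming AWS EC2 instance families (c5 for compute-heavy work, r5 for memory-heavy work, i3 for I/O-heavy work); the workload names and node counts are made up for illustration:

```python
# Match machine profiles to workloads. Instance families are real EC2 families
# (c5 = compute-optimized, r5 = memory-optimized, i3 = storage/I/O-optimized);
# the workload names and counts are hypothetical.
WORKLOAD_PROFILES = {
    "cpu_bound_etl":      {"instance_type": "c5.4xlarge", "count": 8},
    "in_memory_spark":    {"instance_type": "r5.4xlarge", "count": 6},
    "io_heavy_hive_scan": {"instance_type": "i3.2xlarge", "count": 10},
}

def instances_for(workload: str) -> dict:
    """Build the Instances portion of a cluster request for a given workload."""
    profile = WORKLOAD_PROFILES[workload]
    return {
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": profile["instance_type"],
        "InstanceCount": profile["count"],
        "KeepJobFlowAliveWhenNoSteps": False,
    }

# Example usage with the ephemeral-cluster sketch above:
# emr.run_job_flow(Name="in-memory-analytics",
#                  Instances=instances_for("in_memory_spark"), ...)
```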
5. Draw Insights From Data Where It Naturally Lives
As businesses deploy more and more devices “on the edge”, the data generated by those devices increasingly lives in the cloud. Because analytics thrives on data, typically large volumes of it, it makes a lot of sense for the Big Data analytics platform to live in the cloud alongside that data.
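A small sketch of that pattern with PySpark, assuming the cluster has the S3A connector configured; the bucket, path, and column name are hypothetical:

```python
# Analyze data where it already lives: read directly from cloud object storage
# with Spark. Bucket, path, and column names below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("edge-device-analytics").getOrCreate()

# Read device telemetry straight from object storage; no copy onto a
# local HDFS cluster is required before analysis.
events = spark.read.parquet("s3a://my-bucket/edge-devices/telemetry/")

# Example insight: event volume per device type.
events.groupBy("device_type").count().show()
```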
6. Simplify Your Operations
In the cloud, you can provision different types of clusters with different characteristics and configurations, each suited to a particular set of jobs. This frees IT administrators from maintaining multiple long-lived clusters or the intricate policies a multi-tenant environment requires.
Published at DZone with permission of Roni Fontaine, DZone MVB.