
Data Analysis Using Apache Hive and Apache Pig


Learn about loading and storing data using Hive, an open-source data warehouse system, and Pig, which can be used for the ETL data pipeline and iterative processing.


Apache Hive, an open-source data warehouse system, is used with Apache Pig for loading and transforming unstructured, structured, or semi-structured data for analysis and better business insights. Pig, a scripting platform widely used for ETL, is used to import data into and export data from Hive and to process large datasets. Pig suits both ETL data pipelines and iterative processing.

In this blog, let's discuss loading and storing data in Hive with a Pig relation using HCatalog.

Prerequisites

Download and configure the following:

Use Case

In this blog, let's discuss the following use case:

  • Loading unstructured data into Hive.
  • Processing, transforming, and analyzing data in Pig.
  • Loading structured data into a different table in Hive using Pig.

Data Description

Two cricket data files with Indian Premier League (IPL) data from 2008 to 2016 are used as the data source. The files are as follows:

  • matches.csv: Provides details about each match played.
  • deliveries.csv: Provides details about consolidated deliveries of all the matches.

These files are extracted and loaded into Hive. The data is further processed, transformed, and analyzed to get the winner of each season and the top five batsmen with the most runs in each season and across all seasons.

Synopsis

  • Create database and database tables in Hive.
  • Import data into Hive tables.
  • Call Hive SQL in Shell script.
  • View database architecture.
  • Load and store Hive data into Pig relation.
  • Call Pig script in Shell script.
  • Apply pivot concept in Hive SQL.
  • View output.

Creating Database and Database Tables in Hive

To create databases and database tables in Hive, save the below query as a SQL file (database_table_creation.sql):
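The original file is not reproduced here, but a DDL script along these lines would match the description; the column lists are assumptions based on the match and delivery fields discussed in this post, not the author's exact schema.

```sql
-- Hypothetical sketch of database_table_creation.sql.
CREATE DATABASE IF NOT EXISTS ipl_stats;
USE ipl_stats;

-- One row per match played (assumed columns).
CREATE TABLE IF NOT EXISTS matches (
  id INT, season INT, city STRING, match_date STRING,
  team1 STRING, team2 STRING, winner STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
TBLPROPERTIES ('skip.header.line.count' = '1');

-- One row per ball bowled across all matches (assumed columns).
CREATE TABLE IF NOT EXISTS deliveries (
  match_id INT, batting_team STRING, bowling_team STRING,
  batsman STRING, bowler STRING, batsman_runs INT, total_runs INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
TBLPROPERTIES ('skip.header.line.count' = '1');
```

The `skip.header.line.count` property keeps the CSV header row out of query results.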

Importing Data Into Hive Tables

To load data from both the CSV files into Hive, save the below query as a SQL file (data_loading.sql):
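A minimal sketch of what data_loading.sql could contain; the local file paths are placeholders, not the author's actual locations.

```sql
-- Hypothetical sketch of data_loading.sql; adjust paths to your environment.
USE ipl_stats;
LOAD DATA LOCAL INPATH '/home/user/data/matches.csv'    OVERWRITE INTO TABLE matches;
LOAD DATA LOCAL INPATH '/home/user/data/deliveries.csv' OVERWRITE INTO TABLE deliveries;
```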

Calling Hive SQL in Shell Script

To automatically create databases and database tables and to import data into Hive, call both the SQL files (database_table_creation.sql and data_loading.sql) using a Shell script.
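A wrapper script like the following would do it; this is a sketch assuming both SQL files sit in the working directory and `hive` is on the PATH.

```shell
#!/bin/bash
# Hypothetical wrapper script: run the DDL first, then load the data.
set -e                                # stop if either step fails
hive -f database_table_creation.sql   # create database and tables
hive -f data_loading.sql              # import both CSV files
```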

Viewing Database Architecture

The database schema and tables created are as follows:

The raw matches.csv file loaded into Hive schema (ipl_stats.matches) is as follows:


The raw deliveries.csv file loaded into Hive schema (ipl_stats.deliveries) is as follows:

Loading and Storing Hive Data Into Pig Relation

To load and store data from Hive into a Pig relation and to perform data processing and transformation, save the below script as a Pig file (most_run.pig):
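A sketch of what such a Pig script could look like, using HCatalog's `HCatLoader` and `HCatStorer` to read from and write back to Hive. The field names and the join/aggregation shape are assumptions consistent with the schema described earlier, not the author's exact script.

```pig
-- Hypothetical sketch of most_run.pig; run with: pig -useHCatalog most_run.pig
deliveries = LOAD 'ipl_stats.deliveries' USING org.apache.hive.hcatalog.pig.HCatLoader();
matches    = LOAD 'ipl_stats.matches'    USING org.apache.hive.hcatalog.pig.HCatLoader();

-- Attach the season to every delivery, then total the runs per season and batsman.
joined  = JOIN deliveries BY match_id, matches BY id;
grouped = GROUP joined BY (matches::season, deliveries::batsman);
runs    = FOREACH grouped GENERATE
            FLATTEN(group) AS (season, batsman),
            SUM(joined.deliveries::batsman_runs) AS total_runs;

-- Write the result back into a pre-created Hive table.
STORE runs INTO 'ipl_stats.most_run' USING org.apache.hive.hcatalog.pig.HCatStorer();
```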

Note: Create the Hive table before calling the Pig file, as HCatStorer writes into an existing table. To write the processed data back into Hive, save the below script as a SQL file (most_run.sql):

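Since HCatStorer requires the target table to exist, most_run.sql would plausibly look like this; the column types mirror the Pig relation sketched above and are assumptions.

```sql
-- Hypothetical sketch of most_run.sql: target table for the Pig output.
USE ipl_stats;
CREATE TABLE IF NOT EXISTS most_run (
  season     INT,
  batsman    STRING,
  total_runs BIGINT
)
STORED AS ORC;
```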

Calling Pig Script in Shell Script

To automate the ETL process, call both files (most_run.pig and most_run.sql) using a Shell script.
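A sketch of such a script, assuming both files are in the working directory; the table must be created before the Pig job stores into it.

```shell
#!/bin/bash
# Hypothetical automation script: create the target table, then run the Pig job.
set -e
hive -f most_run.sql            # create ipl_stats.most_run if it does not exist
pig -useHCatalog most_run.pig   # process deliveries/matches and store the totals
```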

The data loaded into Hive using Pig script is as follows:


Applying Pivot Concept in Hive SQL

As the data loaded into Hive is in rows, the SQL pivot concept is used to convert rows into columns for more data clarity and for gaining better insights. A user-defined aggregation function (UDAF) is used to perform the pivot in Hive. In this use case, the pivot concept is applied only to the season and run values.

To use the collect UDAF, add the Brickhouse JAR file to the Hive classpath.
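The pivot could then be written along these lines: collect (season, runs) pairs per batsman into a map, then read each season out as its own column. The JAR path, function class, and the two sample seasons are illustrative assumptions.

```sql
-- Hypothetical pivot sketch using Brickhouse's collect UDAF.
ADD JAR /usr/local/hive/lib/brickhouse.jar;  -- assumed location of the Brickhouse JAR
CREATE TEMPORARY FUNCTION collect AS 'brickhouse.udf.collect.CollectUDAF';

SELECT batsman,
       runs['2008'] AS runs_2008,   -- one column per season; extend for 2009-2016
       runs['2009'] AS runs_2009
FROM (
  -- Build a season -> runs map for each batsman.
  SELECT batsman,
         collect(CAST(season AS STRING), total_runs) AS runs
  FROM ipl_stats.most_run
  GROUP BY batsman
) t;
```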

The top five batsmen with the most runs in each season, before applying pivot, are shown as follows:


The top five batsmen with the most runs in each season, after applying pivot, are shown as follows:


Viewing Output

Let's view the winners of each season, the top five batsmen with the most runs, and the year-wise runs of those top five batsmen.

Viewing Winners of a Season

To view the winners of each season, use the following Hive SQL query:
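One plausible formulation, assuming the final of each season is its last match by date (a reasonable heuristic for this dataset, not necessarily the author's query):

```sql
-- Hypothetical sketch: the winner of the last match of each season.
SELECT season, winner
FROM (
  SELECT season, winner,
         ROW_NUMBER() OVER (PARTITION BY season ORDER BY match_date DESC) AS rn
  FROM ipl_stats.matches
) m
WHERE rn = 1;
```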

Viewing Top 5 Batsmen With the Most Runs

To view the top five batsmen with the most runs, use the following Hive SQL query:
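Against the `most_run` table populated by the Pig script, this could be as simple as summing each batsman's season totals; a sketch under that assumption:

```sql
-- Hypothetical sketch: overall top five run-scorers across all seasons.
SELECT batsman, SUM(total_runs) AS runs
FROM ipl_stats.most_run
GROUP BY batsman
ORDER BY runs DESC
LIMIT 5;
```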

The top five batsmen with the most runs are shown graphically using MS Excel as follows:


Viewing Year-Wise Runs of Top 5 Batsmen

To view year-wise runs of the top five batsmen, use the following Hive SQL query:
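A sketch of how this could be written: restrict the season-wise rows to the overall top five found above. The subquery shape is an assumption; Hive supports `IN` subqueries from version 0.13 onward.

```sql
-- Hypothetical sketch: season-by-season runs for the overall top five batsmen.
SELECT season, batsman, total_runs
FROM ipl_stats.most_run
WHERE batsman IN (
  SELECT batsman FROM (
    SELECT batsman, SUM(total_runs) AS runs
    FROM ipl_stats.most_run
    GROUP BY batsman
    ORDER BY runs DESC
    LIMIT 5
  ) top5
)
ORDER BY season, batsman;
```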

The year-wise runs of the top five batsmen are shown graphically using MS Excel as follows:


