
Hive Database: A Basic Introduction


An introduction to the architectural details for Hive and how it interacts with Hadoop.


What Is Hive?

Hive is a data warehouse infrastructure tool for processing structured data in Hadoop. It sits on top of Hadoop to summarize Big Data and makes querying and analysis easy.
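
To make this concrete, here is a minimal HiveQL sketch that creates a table over delimited files in HDFS and queries it with ordinary SQL. The table, columns, and path are illustrative, not taken from the article.

    -- Create a table over comma-delimited text files (schema is illustrative).
    CREATE TABLE employees (
      id         INT,
      name       STRING,
      department STRING,
      salary     DOUBLE,
      hire_date  DATE
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    STORED AS TEXTFILE;

    -- Point the table at data already sitting in HDFS (hypothetical path).
    LOAD DATA INPATH '/data/employees.csv' INTO TABLE employees;

    -- Query it like any SQL table.
    SELECT name, salary FROM employees WHERE salary > 50000;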

Why Use Hive?

  • Most data warehousing applications are built around SQL-based query languages, so Hive makes it easy to port SQL-based applications to Hadoop (see the sketch after this list).

  • Fast query results, even over very large datasets.

  • As data volume and variety increase, more machines can be added without a corresponding drop in performance.
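
As a small illustration of that portability, the aggregate query below is plain SQL and runs unchanged in Hive, which compiles it into distributed jobs behind the scenes. It reuses the illustrative employees table sketched above.

    -- Standard SQL aggregation; Hive turns it into distributed tasks.
    SELECT department,
           count(*)    AS headcount,
           avg(salary) AS avg_salary
    FROM employees
    GROUP BY department;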

Features of Hive

  • It accelerates queries by providing indexes, including bitmap indexes.

  • It stores metadata, which reduces the time needed to perform semantic checks during query execution.

  • It provides built-in functions to manipulate dates, strings, and other data types (see the sketch after this list).

  • It supports different file formats such as Avro, ORC, and Parquet.
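
Here is a brief sketch of two of these features against the illustrative employees table from earlier: built-in date and string functions, and choosing a columnar file format at table-creation time (current_date requires a reasonably recent Hive version).

    -- Built-in string and date functions.
    SELECT upper(name),
           year(hire_date),
           datediff(current_date, hire_date) AS days_employed
    FROM employees;

    -- Store a copy of the table in the columnar ORC format.
    CREATE TABLE employees_orc STORED AS ORC
    AS SELECT * FROM employees;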

Architecture of Hive

The major components of Hive are as follows.

Metastore

This component stores all the structural information for the various tables and partitions in the warehouse, including column and column-type information, the serializers and deserializers necessary to read and write data, and the locations of the corresponding HDFS files where the data is stored.
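
You can see what the Metastore has recorded for a table directly from HiveQL; employees is the illustrative table from earlier.

    -- Lists columns and types, the SerDe, input/output formats,
    -- and the HDFS location the Metastore holds for the table.
    DESCRIBE FORMATTED employees;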

Driver

The driver acts as a controller: it receives HiveQL statements, starts their execution by creating sessions, and monitors the life cycle and progress of the execution. It stores the necessary metadata generated during the execution of a HiveQL statement, and it also serves as the collection point for the data or query results obtained after the Reduce operation.

Compiler

This component parses the query, performs semantic analysis on the different query blocks and query expressions, and eventually generates an execution plan with the help of the table and partition metadata looked up from the Metastore.

In other words, the process can be described by the following flow:

Parser → Semantic Analyzer → Logical Plan Generator → Query Plan Generator.
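
You can ask the compiler to print the plan it generates by prefixing a query with EXPLAIN; the exact output varies with the Hive version and execution engine.

    -- Show the execution plan the compiler produces for this query.
    EXPLAIN
    SELECT name, salary FROM employees WHERE salary > 50000;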

Optimizer

The optimizer performs various transformations on the execution plan to produce an optimized DAG (directed acyclic graph).
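
Several of the optimizer's transformations can be toggled per session. The properties below are standard Hive settings, though their defaults and availability depend on the version in use.

    -- Session-level optimizer switches (defaults vary by version).
    SET hive.optimize.ppd=true;        -- push filter predicates down toward the scan
    SET hive.auto.convert.join=true;   -- turn small-table joins into map-side joins
    SET hive.cbo.enable=true;          -- cost-based optimization (Hive 0.14 and later)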

Executor

After compilation and optimization, the executor executes the tasks according to the DAG. It interacts with Hadoop's job tracker to schedule the tasks to be run, and it takes care of pipelining them by making sure that a dependent task is executed only after all of its prerequisites have run.
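
Which engine the executor hands its tasks to is itself configurable, through the standard hive.execution.engine property.

    -- Choose the engine used for subsequent queries in this session.
    SET hive.execution.engine=tez;   -- or 'mr' for classic MapReduce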

CLI, UI, and Thrift Server

These are the interfaces through which users submit queries and retrieve results; the Thrift server additionally lets remote clients connect, for example over JDBC or ODBC.
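
For example, queries can be submitted through the classic CLI or through Beeline, the JDBC client that connects to HiveServer2, Hive's Thrift service. The host and port below are the usual defaults, assumed here for illustration.

    $ hive -e "SHOW TABLES;"                     # classic command-line interface
    $ beeline -u jdbc:hive2://localhost:10000    # JDBC client via the Thrift server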

How Does Hive Interact With Hadoop?

Execute Query

From the Hive interface (UI or Command Line), the query is sent to the driver for execution.

Check Syntax and Get Plan

The driver takes the help of the query compiler, which parses the query to check its syntax and to work out the query plan and the requirements of the query.

Get Metadata

The compiler sends a metadata request to the Metastore, which returns the relevant table and partition metadata to the compiler.

Execute Plan

The compiler checks the requirements and sends the plan back to the driver. The driver then sends the execution plan to the execution engine.

Execute Job

An execution engine such as Tez or MapReduce executes the compiled query, while the resource manager, YARN, allocates resources for applications across the cluster. With classic MapReduce, the execution engine submits the job to the JobTracker (running on the master node), which assigns it to TaskTrackers running on the data nodes; under YARN, the ResourceManager and a per-application ApplicationMaster fill these roles. Here, the query executes as a MapReduce job.

Fetch Query Result

The execution engine receives the result from data nodes and sends it to the driver, which returns the result to the Hive interface over a JDBC/ODBC connection.
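
Putting the flow together: a query like the one below (over the illustrative employees table from earlier) exercises every step above, and the console reports the job stages as they run.

    -- One query that walks the full pipeline:
    -- parse -> metadata lookup -> plan -> optimize -> MapReduce/Tez job -> result.
    SELECT department, avg(salary) AS avg_salary
    FROM employees
    GROUP BY department
    ORDER BY avg_salary DESC;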

Now that you know these internal details about how Hive executes queries, I hope you will find it even more interesting to work with.



Published at DZone with permission of Sangeeta Gulia, DZone MVB.
