
What Is Apache HCatalog?

Getting ready to dive into HCatalog? This overview of the ever-useful HCatalog storage management layer for Hadoop briefs you on what it does and how it works.

by Saurabh Chhajed · Oct. 20, 15

What is HCatalog?

Apache HCatalog is a storage management layer for Hadoop that helps users of different data processing tools in the Hadoop ecosystem, such as Hive, Pig, and MapReduce, easily read and write data on the cluster. HCatalog presents a relational view of data stored on HDFS in formats such as RCFile, Parquet, ORC, and SequenceFile. It also exposes a REST API so that external systems can access table metadata.
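
As a rough sketch of that REST access path, the snippet below asks WebHCat, the REST server that fronts HCatalog, to describe a table. The host name, the default port 50111, the user name, and the database/table names ("default", "page_views") are placeholders rather than values from this article.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DescribeTableViaRest {
        public static void main(String[] args) throws Exception {
            // Hypothetical WebHCat endpoint: describe table "page_views" in the
            // "default" database; 50111 is WebHCat's usual default port.
            URL url = new URL("http://webhcat-host:50111/templeton/v1/"
                    + "ddl/database/default/table/page_views?user.name=hdfs");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);  // JSON description of the table
                }
            }
        }
    }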

HCatalog Functions

Apache HCatalog provides the following benefits:

  • Frees the user from having to know where the data is stored, via the table abstraction (see the sketch after this list)
  • Enables notifications of data availability
  • Provides visibility for data cleaning and archiving tools
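
To make the table abstraction concrete, here is a minimal sketch of a MapReduce driver that reads an HCatalog-managed table purely by database and table name; the names "default" and "page_views" are hypothetical, and HCatalog looks up the actual HDFS location and file format in the Hive metastore.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hive.hcatalog.mapreduce.HCatInputFormat;

    public class ReadFromHCatalog {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "read-from-hcatalog");
            job.setJarByClass(ReadFromHCatalog.class);

            // Point the job at the table by name only; no HDFS path or file
            // format is given here, since the metastore supplies both.
            HCatInputFormat.setInput(job, "default", "page_views");
            job.setInputFormatClass(HCatInputFormat.class);

            // ... set mapper/reducer classes and output types as usual ...
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileOutputFormat.setOutputPath(job, new Path(args[0]));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Because the location and storage format come from the metastore, the same driver keeps working if the table is later moved or converted to another supported file format.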

How Does It Work?

HCatalog supports reading and writing files in any format for which a Hive SerDe (serializer-deserializer) can be written. By default, HCatalog supports RCFile, Parquet, ORC, CSV, JSON, and SequenceFile formats. To use a custom format, you must provide the InputFormat, OutputFormat, and SerDe.

HCatalog is built on top of the Hive metastore and incorporates components from the Hive DDL. HCatalog provides read and write interfaces for Pig and MapReduce and uses Hive’s command line interface for issuing data definition and metadata exploration commands. It also presents a REST interface to allow external tools access to Hive DDL (Data Definition Language) operations, such as “create table” and “describe table.”
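
As a sketch of the MapReduce read interface, the mapper below receives each row of such a table as an HCatRecord and counts values in the first column; reading column 0 is an illustrative assumption about the table layout.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hive.hcatalog.data.HCatRecord;

    // Mapper over an HCatalog-managed table: values arrive as HCatRecord rows,
    // regardless of the underlying file format (RCFile, ORC, Parquet, ...).
    public class CountByFirstColumnMapper
            extends Mapper<WritableComparable, HCatRecord, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(WritableComparable key, HCatRecord record, Context context)
                throws IOException, InterruptedException {
            // Fields can be read by position; position 0 is an assumed column.
            Object firstColumn = record.get(0);
            context.write(new Text(String.valueOf(firstColumn)), ONE);
        }
    }

Pig reaches the same tables through HCatLoader and HCatStorer, so the record-level view of the data is shared across tools.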

HCatalog presents a relational view of data. Data is stored in tables, and these tables can be placed into databases. Tables can also be partitioned on one or more keys. For a given value of a key (or set of keys), there is one partition containing all rows with that value (or set of values).
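
Partitioning matters for reads because a job can be restricted to just the partitions it needs. Below is a minimal sketch, assuming a table partitioned on a string key named ds, of applying a partition filter when configuring the input; the filter string and column name are illustrative, and the exact helper methods can vary slightly between Hive releases.

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hive.hcatalog.mapreduce.HCatInputFormat;

    public class ReadOnePartition {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance();
            // Restrict the read to a single partition of the (hypothetical)
            // "page_views" table, partitioned on a "ds" key.
            HCatInputFormat.setInput(job, "default", "page_views")
                           .setFilter("ds=\"2015-10-20\"");
            job.setInputFormatClass(HCatInputFormat.class);
            // ... remaining job setup as in the earlier sketch ...
        }
    }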

To see how HCatalog can be used with Pig, visit here.

Published at DZone with permission of Saurabh Chhajed, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
