Writing Parquet Format Data to Regular Files (i.e., Not Hadoop HDFS)

A software architect discusses an issue he ran into with the Hadoop-oriented Parquet Java libraries and the open source project he started to address it.

By Roger Voss · May 22, 2018 · Analysis

The Apache Parquet format is a compressed, efficient columnar data representation. The existing Parquet Java libraries were developed for and within the Hadoop ecosystem. Hence there tends to be a near-automatic assumption that one is working with the Hadoop distributed file system, HDFS.

There are situations in which one might want to write Parquet-formatted data to a regular file system file, particularly when not working in a context that assumes Hadoop and HDFS are present. Some big data tools and runtime stacks that do not assume Hadoop can work directly with Parquet files.

Recently, I was tasked with generating Parquet-formatted data files on a regular file system, so I set out to find example code showing how to write Parquet files. Most of the examples I came across did so in the context of Hadoop HDFS. I did find one, ParquetReaderWriterWithAvro, that alluded to the possibility of creating Parquet data as a regular file, but it was shy on some of the crucial specifics.

Nonetheless, I worked with the examples I found and figured out the details of how to make it all work: round-tripping data by writing it into a regular Parquet file and then reading it back.

I noticed that others had an interest in this as well and so decided to clean up my test bed project a bit, make it open source under the MIT license, and put it on public GitHub:

avro2parquet - Example program that writes Parquet formatted data to plain files (i.e., not Hadoop HDFS); Parquet is a columnar storage format.

Here, in this Maven-built Java 8 project, you can see all the details necessary to make this work out of the box. For instance, I have figured out the pom file dependencies that work with the latest release of the Parquet libraries: parquet-hadoop and parquet-avro. The README file mentions a few other gotchas, such as needing to define the environment variable HADOOP_HOME.
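
For reference, here is a rough sketch of what those two dependencies might look like in a pom file. The version shown is illustrative only; the project's actual pom is the authoritative reference, and it may pin different versions or additional Hadoop artifacts.

```xml
<!-- Illustrative only: use whatever versions the avro2parquet pom actually pins. -->
<dependencies>
  <dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-hadoop</artifactId>
    <version>1.10.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-avro</artifactId>
    <version>1.10.0</version>
  </dependency>
</dependencies>
```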

The crucial information, though, is how to implement one's own versions of org.apache.parquet.io.OutputFile and org.apache.parquet.io.PositionOutputStream for writing to a Parquet output stream and org.apache.parquet.io.InputFile and org.apache.parquet.io.SeekableInputStream for reading from a Parquet stream. The builder for org.apache.parquet.avro.AvroParquetWriter accepts an OutputFile instance whereas the builder for org.apache.parquet.avro.AvroParquetReader accepts an InputFile instance.
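
To make that concrete, below is a minimal sketch of the writing side. It is my own illustration, not the code from avro2parquet: the class name LocalParquetOutputFile is hypothetical, and plain java.nio file I/O stands in for whatever the project actually does. The reading side mirrors it, implementing InputFile and SeekableInputStream over a local file.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

import org.apache.parquet.io.OutputFile;
import org.apache.parquet.io.PositionOutputStream;

// Sketch only: an OutputFile that writes to a local file via java.nio,
// tracking the byte position that the Parquet writer needs via getPos().
public class LocalParquetOutputFile implements OutputFile {
  private final Path path;

  public LocalParquetOutputFile(Path path) {
    this.path = path;
  }

  @Override
  public PositionOutputStream create(long blockSizeHint) throws IOException {
    return wrap(Files.newOutputStream(path, StandardOpenOption.CREATE_NEW));
  }

  @Override
  public PositionOutputStream createOrOverwrite(long blockSizeHint) throws IOException {
    return wrap(Files.newOutputStream(path,
        StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING));
  }

  @Override
  public boolean supportsBlockSize() {
    return false; // a plain local file has no HDFS-style block size
  }

  @Override
  public long defaultBlockSize() {
    return 0;
  }

  // Wrap a plain OutputStream, counting bytes written so getPos() can report
  // the current position in the file.
  private static PositionOutputStream wrap(OutputStream out) {
    return new PositionOutputStream() {
      private long position = 0;

      @Override
      public long getPos() {
        return position;
      }

      @Override
      public void write(int b) throws IOException {
        out.write(b);
        position++;
      }

      @Override
      public void write(byte[] buf, int off, int len) throws IOException {
        out.write(buf, off, len);
        position += len;
      }

      @Override
      public void flush() throws IOException {
        out.flush();
      }

      @Override
      public void close() throws IOException {
        out.close();
      }
    };
  }
}
```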

This example illustrates writing Avro-format data to Parquet. Avro is a row- or record-oriented serialization protocol (that is, not columnar-oriented). The nice thing about Avro is that its schema for objects can be composed dynamically at runtime if need be. It should be fairly straightforward to put a JSON object, or a CSV row, into an Avro representation and then write it out via the AvroParquetWriter. As they say, that is an exercise left for the reader.
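
As a hedged sketch of what that might look like end to end (reusing the hypothetical LocalParquetOutputFile from the sketch above; the record fields and the Snappy codec are arbitrary choices of mine), one can compose a schema with Avro's SchemaBuilder at runtime and write a GenericRecord through the AvroParquetWriter builder:

```java
import java.nio.file.Paths;

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class WriteAvroToParquetExample {
  public static void main(String[] args) throws Exception {
    // Compose an Avro record schema dynamically at runtime.
    Schema schema = SchemaBuilder.record("Person")
        .fields()
        .requiredString("name")
        .requiredInt("age")
        .endRecord();

    GenericRecord person = new GenericData.Record(schema);
    person.put("name", "Alice");
    person.put("age", 42);

    // The AvroParquetWriter builder accepts an OutputFile, so the sketched
    // local-file implementation plugs in here in place of an HDFS path.
    try (ParquetWriter<GenericRecord> writer =
             AvroParquetWriter.<GenericRecord>builder(
                     new LocalParquetOutputFile(Paths.get("people.parquet")))
                 .withSchema(schema)
                 .withCompressionCodec(CompressionCodecName.SNAPPY)
                 .build()) {
      writer.write(person);
    }
  }
}
```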
