
Apache Tika and Apache OpenNLP for Easy PDF Parsing and Munching


Learn how to parse PDFs with ease, and without any code, using the updated Apache Tika and Apache OpenNLP processors for Apache NiFi 1.5.


I have just started working on updated Apache Tika and Apache OpenNLP processors for Apache NiFi 1.5, and while testing them, I found an interesting workflow that I would like to share.

I am using a few of my own processors in this flow. Here is the flow that I was working on:

  • Load some PDFs.

  • Use the built-in Apache Tika processor to extract metadata from the files.

  • Pull out the text using my Apache Tika processor (see the plain-Java Tika sketch after this list for what that extraction boils down to).

  • Split this into individual lines.

  • Extract the text of each line into a sentence attribute using the regular expression (^.*$).

  • Run Apache OpenNLP on that sentence to find names and locations (sketched in plain Java after the list).

  • Run Stanford CoreNLP sentiment analysis on the sentence (also sketched after the list).

  • Run my attribute cleaner to turn those attributes into Avro-safe names.

  • Turn all the attributes into a JSON Flow File.

  • Infer an Avro schema (I only needed this once; then, I'll remove it).

  • Set the name of the schema so that it can be looked up from the Schema Registry.

  • Run QueryRecord to route positive, neutral, and negative sentiments to different places. Example SQL: SELECT * FROM FLOWFILE WHERE sentiment = 'NEGATIVE'. Thanks, Apache Calcite! We also convert from JSON to Avro to send to Kafka, as well as for easy conversion to Apache ORC for Apache Hive usage.

  • Send records to Kafka 1.0. Some are merged to store as files and some are made into Slack messages.

  • Done!
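To make the text-extraction step concrete, here is a minimal plain-Java sketch of roughly what the Tika step does under the hood. The sample.pdf path is just a placeholder, and AutoDetectParser handles other document types the same way:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.BodyContentHandler;

public class TikaPdfExtract {
    public static void main(String[] args) throws Exception {
        // "sample.pdf" is a placeholder path for any PDF you want to test with.
        try (InputStream in = Files.newInputStream(Paths.get("sample.pdf"))) {
            AutoDetectParser parser = new AutoDetectParser();
            BodyContentHandler text = new BodyContentHandler(-1); // -1 = no write limit
            Metadata metadata = new Metadata();

            // Tika detects the content type and pulls out plain text plus metadata.
            parser.parse(in, text, metadata);

            System.out.println(text.toString());
            for (String name : metadata.names()) {
                System.out.println(name + " = " + metadata.get(name));
            }
        }
    }
}
```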
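The name-and-location step boils down to OpenNLP's pre-trained name finder models. A minimal sketch, assuming you have downloaded en-ner-person.bin and en-ner-location.bin separately (the example sentence and model paths are placeholders, not what the processor ships with):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Arrays;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.tokenize.SimpleTokenizer;
import opennlp.tools.util.Span;

public class SentenceEntities {
    public static void main(String[] args) throws Exception {
        // Hypothetical sentence; in the flow this comes from the "sentence" attribute.
        String sentence = "Tim flew from Princeton to San Francisco for the conference.";
        String[] tokens = SimpleTokenizer.INSTANCE.tokenize(sentence);

        // The pre-trained models are downloaded separately; these paths are assumptions.
        try (InputStream personModel = new FileInputStream("en-ner-person.bin");
             InputStream locationModel = new FileInputStream("en-ner-location.bin")) {

            NameFinderME people = new NameFinderME(new TokenNameFinderModel(personModel));
            NameFinderME places = new NameFinderME(new TokenNameFinderModel(locationModel));

            // Span end indexes are exclusive, so copyOfRange grabs the whole entity.
            for (Span s : people.find(tokens)) {
                System.out.println("PERSON: " + String.join(" ",
                        Arrays.copyOfRange(tokens, s.getStart(), s.getEnd())));
            }
            for (Span s : places.find(tokens)) {
                System.out.println("LOCATION: " + String.join(" ",
                        Arrays.copyOfRange(tokens, s.getStart(), s.getEnd())));
            }
        }
    }
}
```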
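The sentiment step uses Stanford CoreNLP's sentiment annotator on each extracted sentence. A small standalone sketch (the example sentence is made up); the flow's sentiment attribute that QueryRecord routes on is derived from the labels this annotator returns:

```java
import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;

public class SentenceSentiment {
    public static void main(String[] args) {
        // Sentiment needs the parse annotator and the CoreNLP model jars on the classpath.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, parse, sentiment");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // Hypothetical sentence; in the flow it comes from the extracted attribute.
        Annotation doc = new Annotation("The quarterly report was a complete disaster.");
        pipeline.annotate(doc);

        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            // Returns labels such as "Very negative", "Negative", "Neutral", or "Positive".
            String label = sentence.get(SentimentCoreAnnotations.SentimentClass.class);
            System.out.println(label + " :: " + sentence);
        }
    }
}
```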

Here is an example of my generated JSON file.

Here are some of the attributes after the run.

You can see the queries in the QueryRecord processor.

The results of a run showing a sentence, file metadata, and sentiment.

We are now waiting for new PDFs (and other file types) to arrive in the directory for immediate processing.

I have a JsonTreeReader, a Hortonworks Schema Registry, and an AvroRecordSetWriter.

We set the properties and the schema registry for the reader and writer. Obviously, we can use other readers and writers as needed for types like CSV.
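For context, here is a hedged sketch of the kind of Avro schema the registry lookup hands to the writer. The field names are illustrative assumptions rather than the exact attributes my flow produces; the same JSON definition would live in the Hortonworks Schema Registry under the schema name set on each flow file.

```java
import org.apache.avro.Schema;

public class SentimentRecordSchema {
    public static void main(String[] args) {
        // Illustrative field names only; the real schema mirrors whatever attributes
        // the flow produces and is stored in the Schema Registry under the name
        // that the flow sets on each flow file.
        String definition = "{"
                + "\"type\":\"record\",\"name\":\"pdfSentence\",\"fields\":["
                + "{\"name\":\"sentence\",\"type\":\"string\"},"
                + "{\"name\":\"sentiment\",\"type\":\"string\"},"
                + "{\"name\":\"author\",\"type\":[\"null\",\"string\"],\"default\":null},"
                + "{\"name\":\"title\",\"type\":[\"null\",\"string\"],\"default\":null}"
                + "]}";

        Schema schema = new Schema.Parser().parse(definition);
        System.out.println(schema.toString(true));
    }
}
```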

When I'm done, since it's Apache NiFi 1.5, I commit my changes for versioning.

For the upcoming processor, I will be interfacing Apache Tika with OCR, NER, NLTK, Grobid quantities, MITIE, age detection, vision, deep learning, and image captioning.

Apache Tika has added some really cool updates, so I can't wait to dive in.


