
Three Methods to Automatically Validate PDF Data

Want to validate data from a PDF? Sure you do. See three different ways you can handle it, each with its own benefits.



An insurance customer delivery team wanted to automate, as part of regression testing, the validation of data present in PDF documents. After going through the requirements, we explored multiple options and suggested three solutions, each with its own set of unique features. Two of the options involve a two-step process: the first step converts the PDF document into a text document, and the second extracts the required text from it. In this article, we elaborate on the problem and share an overview of each option.


Recently, we were in discussion with a project delivery team that deals with PDF documents. The team works for an insurance customer, where one of its activities is generating customer policies as PDF documents. As a standard process, the generated PDF documents are verified for content and structure and then sent to the customer. After each functionality change, the team needs to perform a regression test using various data sets and multiple templates. Today, the team has to go through each generated PDF document and manually validate information like name, address, policy number, and policy start date. As the number of tests is expected to grow along with the number of PDF templates, the team wanted a solution that would reduce the manual effort involved and work across a large volume of documents.

At first glance, the task of locating data inside a PDF seems straightforward. But the task is not as simple as it appears. PDF is a display format, and data stored in a PDF may not be in the same order in which it is displayed on screen or printed. This is because text and images in a PDF are placed using page coordinates and do not have the linear structure (as in a text file) or hierarchical structure (as in an XML file) that we are accustomed to in other formats. In this respect, a PDF is like an HTML document: it specifies how data is to be displayed visually, rather than using a well-defined structure that in turn decides the layout of the data. For example, when a PDF (as shown in Figure 1) is converted to a text file (as shown in Figure 2), paragraphs that are placed next to each other visually may be separated by many other paragraphs in the converted text file.


Figure 1: Sample PDF


Figure 2: PDF converted to Text

After considering the delivery team's requirements for an easy, scalable, and automatable process, we explored various options and came up with three viable methods that address the needs of the team. The following sections describe each method. It is important to note that two of these methods work on a text file generated from the PDF document. Within the scope of this article, this text file is referred to as the 'extract file'.

Method 1: Extracting Text Using Coordinates

The most commonly mentioned technique for extracting text from a PDF document is to use the PDFTextStripperByArea class provided by the Apache PDFBox library. To use it, we specify a rectangular area (by its coordinates) which, when placed on the PDF page, defines the region from which PDFBox will extract text. A Java sample using this class is shown in Table 1.

PDDocument document = PDDocument.load(new File("policy.pdf")); // hypothetical input file
PDFTextStripperByArea stripper = new PDFTextStripperByArea();
Rectangle rect = new Rectangle(10, 280, 275, 60); // coordinates of region
stripper.addRegion("r1", rect);
List allPages = document.getDocumentCatalog().getAllPages();
PDPage firstPage = (PDPage) allPages.get(0);
stripper.extractRegions(firstPage); // required before getTextForRegion()
System.out.println("Rectangle dimensions: " + rect);
System.out.println("Text: " + stripper.getTextForRegion("r1"));

Table 1: Sample for PDFTextStripperByArea

One of the difficulties of this coordinate-based extraction method is that we need to define each rectangular area from which we wish to extract text. For long documents, this task can be time-consuming as well as error-prone, because we have to guess the positions and sizes of the rectangles needed. In most cases, this becomes a trial-and-error exercise that needs multiple iterations.

To make the task of specifying coordinates easier, we developed a helper application, PDFVisualMapper (shown in Figure 3), that loads one PDF page as an image and allows us to specify rectangles using a rubber-banding technique (click and drag the mouse to define an outline). As output, the application generates the coordinates of the rectangles, which can be used with PDFTextStripperByArea to extract text, as shown in Figure 4.


Figure 3: PDFVisualMapper showing three rectangular areas from which text is to be extracted for validation


Figure 4: Output generated by PDFVisualMapper, with field names and their coordinates

Using coordinates to define areas and extract text from them is a fairly simple method that works as long as the position of the text elements does not change. For example, if we define an area to extract two lines of text (say, a business address), it will fail to extract the complete address if the address spans three lines (the third line will not be extracted). Similarly, if other elements have been defined for extraction on the assumption that the address spans two lines, their data may shift down the page when the address spans more than two lines.

While this method is the simplest of the three methods, its biggest limitation is the fixed nature of the coordinates used for extraction. Any change to the position of the data can result in incorrect extraction.

Method 2: Finding Known Values

Method two validates the PDF document by searching the extract file for known values. To use this method, we create an input file that contains the text we wish to search for, called the 'master text'. The application searches the extract file for each master text entry, and the PDF document is declared valid if all the required master text entries are found.

To validate a document, we need to create an input file as shown in Table 2. To account for multiple occurrences of the master text (for example, a policy number can appear in multiple places), it is possible to specify that the master text being searched for appears after a given prefix text, before a given suffix text, or between the two.

Field Name        Master Text
Policy Number:    Premium Policy
Expiration Date   10 Apr 2016

Table 2: Input file

The application will search for the master text one entry at a time and generate output as shown in Table 3. If the master text is found, the entry is marked ‘FOUND’. If the master text is not found, that entry is marked as ‘NOT FOUND’. It is important to note that the solution will stop at the first occurrence of the master text entry and will not search for all occurrences of the master text entry.

Field Name        Master Text       Result
Policy Number:    Premium Policy    FOUND
Expiration Date   10 Apr 2016       FOUND

Table 3: Output generated by solution
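The search behavior described above can be sketched in plain Java. This is a hypothetical illustration, not the team's actual code: the class name MasterTextSearch, its method signature, and the exact prefix/suffix semantics are our own simplifications of the description above.

```java
import java.util.Objects;

// Hypothetical sketch of method two: search the extract file's text for a
// master-text entry, optionally constrained to appear after a prefix and/or
// before a suffix. As in the article, only the first occurrence is checked.
public class MasterTextSearch {

    // Returns "FOUND" if masterText occurs (after prefix and before suffix,
    // where either constraint may be null for "unconstrained"), else "NOT FOUND".
    public static String search(String extractText, String masterText,
                                String prefix, String suffix) {
        Objects.requireNonNull(extractText);
        int from = 0;
        if (prefix != null) {
            int p = extractText.indexOf(prefix);
            if (p < 0) return "NOT FOUND";
            from = p + prefix.length();   // search only after the prefix
        }
        int hit = extractText.indexOf(masterText, from);
        if (hit < 0) return "NOT FOUND";
        if (suffix != null) {
            // the suffix must appear after the master text
            if (extractText.indexOf(suffix, hit + masterText.length()) < 0) {
                return "NOT FOUND";
            }
        }
        return "FOUND";
    }

    public static void main(String[] args) {
        String extract = "Premium Policy\nPolicy Number: PN-12345\nExpiration Date 10 Apr 2016";
        System.out.println(search(extract, "10 Apr 2016", "Expiration Date", null)); // FOUND
        System.out.println(search(extract, "11 Apr 2016", null, null));              // NOT FOUND
    }
}
```

In a real run, the output would be a row per master-text entry, as in Table 3.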

Method 3: Rule-Based Extraction

Method three is more involved than the methods described in the earlier sections. We have named it 'rule-based extraction.' Similar to method one, this method extracts text and generates an output file, whose contents are then validated independently against the master data. In contrast to method one, here we define rules that allow us to navigate the extract file and pull data from it. To accommodate various formats, multiple rules are supported by the solution. As an example, for the sample document shown in Figure 2, we can define the rule file as shown in Table 4 (for convenience, the JSON format is used):

    "rules": [
        { "command": "skipTill", "terminatingText": "Policy Number" },
        { "command": "extractText", "fieldName": "policyNumber" },
        { "command": "skipTillEnd" }

Table 4: Rules
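A minimal interpreter for such rules can be sketched in plain Java. This is an assumption-laden illustration, not the actual solution: the class name RuleExtractor, the (command, argument) pair representation, and the exact semantics of each command are our own simplifications, and JSON parsing is omitted.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the rule-based extractor: a tiny interpreter that
// walks the extract file line by line and applies skipTill / extractText /
// skipTillEnd commands. Each rule is a (command, argument) string pair.
public class RuleExtractor {

    public static Map<String, String> run(List<String> lines, List<String[]> rules) {
        Map<String, String> fields = new LinkedHashMap<>();
        int pos = 0;
        for (String[] rule : rules) {
            switch (rule[0]) {
                case "skipTill":     // advance until a line containing the marker
                    while (pos < lines.size() && !lines.get(pos).contains(rule[1])) pos++;
                    break;
                case "extractText":  // capture the current line under the field name
                    if (pos < lines.size()) fields.put(rule[1], lines.get(pos).trim());
                    break;
                case "skipTillEnd":  // consume the rest of the file
                    pos = lines.size();
                    break;
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        List<String> extract = List.of("Premium Policy",
                                       "Policy Number: PN-12345",
                                       "10 Apr 2016");
        List<String[]> rules = List.of(
                new String[]{"skipTill", "Policy Number"},
                new String[]{"extractText", "policyNumber"},
                new String[]{"skipTillEnd"});
        System.out.println(run(extract, rules)); // {policyNumber=Policy Number: PN-12345}
    }
}
```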

By defining various text operations as rules, this method is more flexible than the other methods. Due to parameterization, not only are the rules flexible, the solution itself is extensible, as new rules can easily be added to cater to the specific needs of a team. Because the solution uses the dynamic class loading facilities of the Java language, adding new rules is as simple as creating a Java archive (jar file) containing the new rules and adding it to the CLASSPATH of the solution.
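The dynamic-loading idea can be illustrated with standard Java reflection. This sketch is hypothetical: the Rule interface and class names are invented for illustration, and in the real solution a new rule class would live in a separately packaged jar on the CLASSPATH rather than being nested in the same file.

```java
// Hypothetical illustration of extending a rule engine via dynamic class
// loading: rule implementations share an interface, and new ones can be
// instantiated by fully qualified class name via reflection.
public class RuleLoader {

    public interface Rule {
        String describe();
    }

    // Stands in for a rule that would normally ship in a separate jar.
    public static class SkipTillRule implements Rule {
        public String describe() { return "skipTill"; }
    }

    // Load and instantiate a rule by class name; no code change is needed
    // in the loader itself when new rule classes appear on the CLASSPATH.
    public static Rule load(String className) {
        try {
            return (Rule) Class.forName(className)
                               .getDeclaredConstructor()
                               .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("cannot load rule: " + className, e);
        }
    }

    public static void main(String[] args) {
        Rule r = load("RuleLoader$SkipTillRule");
        System.out.println(r.describe()); // prints "skipTill"
    }
}
```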

While this method is very flexible, it depends on well-known text elements ('markers', as we call them) in the extract file for the solution to identify its position in the file and extract data accordingly. If the position of these markers changes, either due to a change in format or due to the use of a different PDF-to-text conversion solution, the rules file will have to be updated to account for the changes.

Comparing the Methods

Of the three methods described, the most logical question to ask is, 'Which of these methods is the best?' Sadly, there is no simple answer: it depends on the input PDF documents from which we wish to extract data. If we wish to extract known data from the PDF, then method two is preferred. For example, if we know the policy number that needs to be found, we can specify it as master text and the solution will find that text in the extract file. If we do not know the exact value of the data we are looking for, methods one and three are preferred. Additionally, if we are guaranteed that the layout of the data in the PDF will not change, method one is preferable over method three. If we wish to extract data from a document and also ensure that the structure of the document is appropriate, method three is preferred, as we can define rules that allow for such validation in addition to data extraction. The biggest difference between method one and the other two is that method one operates directly on the PDF document, whereas methods two and three need the PDF document to be converted into an equivalent text file first.


Today, businesses are focused on increasing the effectiveness of their processes through automation. So, how does each method stack up against automation? We are happy to note that each of these methods can be included in an automation workflow. For each method, some initial manual effort is needed: defining the rectangular areas for extraction (method one), defining the master file (method two), or defining the rules for extraction (method three). After creating these input files, we can apply the solution to multiple instances of the same template. Thus, each method is scalable across multiple instances of one template as well as flexible enough to adapt to multiple templates without needing any code changes.


The task of converting a PDF document to a text document is fairly easy using tools like Apache PDFBox (or XPDF). But after the conversion, extracting the required data from the converted text is a challenge, because text is not arranged in a well-defined format inside a PDF. There is no guarantee that text will appear in the same structure when extracted from the PDF (though text on one page does appear on the same page). In this article, we have presented an overview of three methods that we developed to address the problem of extracting data from a PDF and validating it against provided master data. Each method has its own advantages and its own set of limitations. The choice of method depends on the PDF document itself, as well as the way in which data from the PDF has to be processed.



Opinions expressed by DZone contributors are their own.
