Parsing in Java (Part 1): Structures, Trees, and Rules

In Part 1 in this comprehensive series on parsers (with a focus on Java), we examine how parsers work, the difference between Parse Trees and ASTs, and more.

By Gabriele Tomassetti · Updated Jun. 04, 17 · Tutorial


If you need to parse a language, or a document, from Java, there are fundamentally three ways to solve the problem:

  • Use an existing library that supports that specific language: for example, a library to parse XML.
  • Build your own custom parser by hand.
  • Use a tool or library to generate a parser: for example, ANTLR, which you can use to build parsers for any language.

Use an Existing Library

The first option is the best for well-known and supported languages, like XML or HTML. A good library usually also includes an API to programmatically build and modify documents in that language. This is typically more than what you get from a basic parser. The problem is that such libraries are not so common, and they support only the most common languages. In other cases, you are out of luck.
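
As a concrete sketch of this first option, the following Java snippet parses a small XML document with the DOM API that ships with the JDK (javax.xml.parsers); the XML content itself is invented for the example.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class XmlLibraryExample {
    public static void main(String[] args) throws Exception {
        // An invented document, just to have something to parse
        String xml = "<book><title>Parsing in Java</title></book>";

        // The JDK's built-in DOM parser takes care of both lexing and parsing
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(xml)));

        // The library also provides an API to navigate (and modify) the resulting document
        Element root = doc.getDocumentElement();
        System.out.println(root.getTagName());                                            // book
        System.out.println(root.getElementsByTagName("title").item(0).getTextContent());  // Parsing in Java
    }
}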

Building Your Own Custom Parser by Hand

You may need to go for the second option if you have particular needs: either because the language you need to parse cannot be handled by traditional parser generators, or because you have specific requirements that a typical parser generator cannot satisfy, for instance, the best possible performance or a deep integration between different components.

A Tool or Library to Generate a Parser

In all other cases, the third option should be the default one, because it is the most flexible and has the shortest development time. That is why, in this article, we concentrate on the tools and libraries that correspond to this option.

Note: Text in blockquotes describing a program comes from the respective documentation.

Tools To Create Parsers

We are going to see:

  • Tools that can generate parsers usable from Java (and possibly from other languages)
  • Java libraries to build parsers

Tools that can be used to generate the code for a parser are called parser generators or compiler-compilers. Libraries that create parsers are known as parser combinators.

Parser generators (or parser combinators) are not trivial: You need some time to learn how to use them, and not all types of parser generators are suitable for all kinds of languages. That is why we have prepared a list of the best-known of them, with a short introduction for each. We are also concentrating on one target language: Java. This also means that (usually) the parser itself will be written in Java.

To list all possible tools and parser libraries for all languages would be kind of interesting, but not that useful. That is because there would be simply too many options, and we would all get lost in them. By concentrating on one programming language, we can provide an apples-to-apples comparison and help you choose one option for your project.

Useful Things to Know About Parsers

To make sure that this list is accessible to all programmers, we have prepared a short explanation of terms and concepts that you may encounter searching for a parser. We are not trying to give you formal explanations, but practical ones.

Structure of a Parser

A parser is usually composed of two parts: a lexer, also known as a scanner or tokenizer, and the parser proper. Not all parsers adopt this two-step schema: Some parsers do not depend on a lexer; they are called scannerless parsers.

A lexer and a parser work in sequence: The lexer scans the input and produces the matching tokens; the parser then scans the tokens and produces the parsing result.

Let’s look at the following example and imagine that we are trying to parse a mathematical operation.

437 + 734


The lexer scans the text and finds ‘4’, ‘3’, ‘7’, and then the space. The job of the lexer is to recognize that the first characters constitute one token of type NUM. Then the lexer finds a ‘+’ symbol, which corresponds to a second token of type PLUS, and lastly, it finds another token of type NUM.

[Image: The input stream is transformed into tokens by the lexer and then into an AST by the parser.]

The parser will typically combine the tokens produced by the lexer and group them.

The definitions used by lexers or parsers are called rules or productions. A lexer rule will specify that a sequence of digits corresponds to a token of type NUM, while a parser rule will specify that a sequence of tokens of type NUM, PLUS, NUM corresponds to an expression.
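
As a minimal hand-written sketch of exactly these two rules (not produced by any tool; the names and structure are invented for illustration, and Java records require a recent JDK), the lexer below turns the text into NUM and PLUS tokens, and the parser checks that they form a NUM PLUS NUM sequence and evaluates it:

import java.util.ArrayList;
import java.util.List;

public class TinyAdditionParser {

    enum TokenType { NUM, PLUS }

    record Token(TokenType type, String text) { }

    // Lexer rule: a sequence of digits is a NUM, '+' is a PLUS, whitespace is skipped
    static List<Token> lex(String input) {
        List<Token> tokens = new ArrayList<>();
        int i = 0;
        while (i < input.length()) {
            char c = input.charAt(i);
            if (Character.isWhitespace(c)) {
                i++;
            } else if (Character.isDigit(c)) {
                int start = i;
                while (i < input.length() && Character.isDigit(input.charAt(i))) i++;
                tokens.add(new Token(TokenType.NUM, input.substring(start, i)));
            } else if (c == '+') {
                tokens.add(new Token(TokenType.PLUS, "+"));
                i++;
            } else {
                throw new IllegalArgumentException("Unexpected character: " + c);
            }
        }
        return tokens;
    }

    // Parser rule: a NUM PLUS NUM sequence is an expression; here we simply evaluate it
    static int parse(List<Token> tokens) {
        if (tokens.size() != 3
                || tokens.get(0).type() != TokenType.NUM
                || tokens.get(1).type() != TokenType.PLUS
                || tokens.get(2).type() != TokenType.NUM) {
            throw new IllegalArgumentException("Expected NUM PLUS NUM");
        }
        return Integer.parseInt(tokens.get(0).text()) + Integer.parseInt(tokens.get(2).text());
    }

    public static void main(String[] args) {
        System.out.println(parse(lex("437 + 734"))); // prints 1171
    }
}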

Scannerless parsers are different because they process the original text directly, instead of a list of tokens produced by a lexer.

It is now typical to find suites that can generate both a lexer and a parser. In the past, it was instead more common to combine two different tools: One to produce the lexer and one to produce the parser. This was, for example, the case of the venerable lex & yacc couple: lex produced the lexer, while yacc produced the parser.

Parse Tree and Abstract Syntax Tree

There are two related terms that are sometimes used interchangeably: parse tree and Abstract Syntax Tree (AST).

Conceptually they are very similar:

  • They are both trees: There is a root representing the whole piece of code parsed. Then there are smaller subtrees representing portions of code that become smaller until single tokens appear in the tree.
  • The difference is the level of abstraction: The parse tree contains all the tokens that appeared in the program and possibly a set of intermediate rules. The AST instead is a polished version of the parse tree, in which the information that could be derived, or that is not important to understand the piece of code, is removed.

In the AST, some information is lost. For instance, comments and grouping symbols (parentheses) are not represented. Things like comments are superfluous for a program and grouping symbols are implicitly defined by the structure of the tree.

A parse tree is a representation of the code that is closer to the concrete syntax. It shows many details of the implementation of the parser. For instance, usually rules correspond to the type of a node. Parse trees are usually transformed into ASTs by the user, with some help from the parser generator.

A graphical representation of an AST looks like this.

[Image: An abstract syntax tree for the Euclidean algorithm.]

Sometimes you may want to start by producing a parse tree and then derive an AST from it. This can make sense because the parse tree is easier for the parser to produce (it is a direct representation of the parsing process), but the AST is simpler and easier to process in the following steps (and by the following steps, we mean all the operations that you may want to perform on the tree: code validation, interpretation, compilation, etc.).
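
As an invented sketch of what an AST can look like in code (the class names are made up for illustration), the addition from the earlier example reduces to a couple of node classes; a parse tree for the same input would additionally keep nodes for the tokens, such as parentheses, and for the intermediate rules:

public class AstSketch {

    // A minimal AST: it keeps only the information needed to understand the code
    interface AstNode { }
    record NumberLiteral(int value) implements AstNode { }
    record Addition(AstNode left, AstNode right) implements AstNode { }

    public static void main(String[] args) {
        // "(437 + 734)": the grouping parentheses are not represented, because the
        // shape of the tree already encodes the grouping; a parse tree would instead
        // contain the '(' and ')' tokens and a node for the expression rule
        AstNode ast = new Addition(new NumberLiteral(437), new NumberLiteral(734));
        System.out.println(ast);
    }
}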

Grammar

A grammar is a formal description of a language that can be used to recognize its structure.

In simple terms, a grammar is a list of rules that define how each construct can be composed. For example, a rule for an if statement could specify that it must start with the “if” keyword, followed by a left parenthesis, an expression, a right parenthesis, and a statement.

A rule could reference other rules or token types. In the example of the if statement, the keyword “if” and the left and right parentheses were token types, while the expression and the statement were references to other rules.

The most used format to describe grammars is the Backus-Naur Form (BNF), which also has many variants, including the Extended Backus-Naur Form. The Extended variant has the advantage of including a simple way to denote repetitions. A typical rule in a Backus-Naur grammar looks like this:

<symbol> ::= __expression__


The <symbol> is usually a nonterminal, which means that it can be replaced by the group of elements on the right, __expression__. The element __expression__ could contain other nonterminal symbols or terminal ones. Terminal symbols are simply the ones that do not appear as a <symbol> anywhere in the grammar. A typical example of a terminal symbol is a string of characters, like “class”.
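
For example, the if statement described earlier could be written, in a BNF-like notation (the rule names are invented for illustration), as:

<if_statement> ::= "if" "(" <expression> ")" <statement>

Here "if", "(", and ")" are terminal symbols, while <expression> and <statement> are nonterminal symbols defined by other rules.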

Left-Recursive Rules

In the context of parsers, an important feature is support for left-recursive rules. This means that a rule could start with a reference to itself. This reference could be also indirect.

Consider for example arithmetic operations. An addition could be described as two expression(s) separated by the plus (+) symbol, but an expression could also contain other additions.

addition       ::= expression '+' expression
multiplication ::= expression '*' expression
// an expression could be an addition, a multiplication, or a number
expression     ::= addition | multiplication | NUM


This description also matches multiple additions, like 5 + 4 + 3. That is because it can be interpreted as expression (5) ‘+’ expression (4 + 3). And then 4 + 3 itself can be divided into its two components.

The problem is that these kinds of rules may not be used with some parser generators. The alternative is a long chain of expressions that also takes care of the precedence of operators.

Some parser generators support direct left-recursive rules, but not indirect ones.
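
As a sketch of that alternative, in the same notation as above and using EBNF-style repetition, the left-recursive rules can be replaced by a chain of rules, one per level of precedence:

// multiplication binds tighter than addition
expression     ::= addition
addition       ::= multiplication ('+' multiplication)*
multiplication ::= NUM ('*' NUM)*

This version is no longer left-recursive, and it also encodes the fact that in 5 + 4 * 3 the multiplication is grouped first.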

Types of Languages and Grammars

We care mostly about two types of languages that can be parsed with a parser generator: regular languages and context-free languages. We could give you the formal definition according to the Chomsky hierarchy of languages, but it would not be that useful. Let’s look at some practical aspects instead.

A regular language can be defined by a series of regular expressions, while a context-free one needs something more. A simple rule of thumb is that if a grammar of a language has recursive elements it is not a regular language. For instance, as we said elsewhere, HTML is not a regular language. In fact, most programming languages are context-free languages.
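
A classic illustration of this rule of thumb is balanced parentheses: matching arbitrarily deep nesting requires a recursive rule, which no regular expression (in the formal sense) can express, while a context-free grammar handles it naturally. A sketch in the same notation as above:

// the recursion makes this context-free, not regular
balanced ::= '(' balanced ')'
balanced ::=                      // the empty string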

Usually, there are regular grammars and context-free grammars that correspond respectively to regular and context-free languages. But to complicate matters, there is a relatively new (created in 2004) kind of grammar, called Parsing Expression Grammar (PEG). These grammars are as powerful as context-free grammars, but, according to their authors, they describe programming languages more naturally.

The Differences Between PEG and CFG

The main difference between PEG and CFG is that the ordering of choices is meaningful in PEG, but not in CFG. If there are many possible valid ways to parse an input, a CFG will be ambiguous and thus wrong. Instead, with PEG, the first applicable choice will be chosen, and this automatically solves some ambiguities.
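
As a small invented illustration, the PEG rule below always tries the keyword alternative first, so the input "if" is never matched as an identifier; in a CFG, the same two alternatives could make the grammar ambiguous, assuming the identifier rule can also match the text "if":

// PEG notation: '/' is an ordered choice, tried from left to right
word <- "if" / identifier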

Another difference is that PEG uses scannerless parsers: They do not need a separate lexer or lexical analysis phase.

Traditionally, both PEG and some CFGs have been unable to deal with left-recursive rules, but some tools have found workarounds for this, either by modifying the basic parsing algorithm or by having the tool automatically rewrite a left-recursive rule in a nonrecursive way. Either approach has downsides: It can make the generated parser less intelligible or worsen its performance. However, in practical terms, the advantages of easier and quicker development outweigh the drawbacks.

Stay Tuned

That's all for Part 1, but stay tuned. Coming up, we'll delve into parser generators, their workflows, the various types, and some examples of them in action.


Published at DZone with permission of Gabriele Tomassetti, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
