How to Create DSL With ANTLR4 and Scala


Domain-specific languages, when done correctly, help a lot with improving developer productivity. The first thing that you need to do while creating a DSL is to create a parser, which takes a piece of text and transforms it into a structured format (such as an Abstract Syntax Tree) so that your program can understand it and do something useful with it. DSLs tend to stick around for years, so while choosing a tool for creating the parser for your DSL, you need to make sure that it's easy to maintain and evolve the language. For parsing a simple DSL, you can just use regular expressions or Scala's built-in parser combinators. But, for even a slightly complex DSL, both of these become a performance and maintenance nightmare.

In this post, we will see how to create a basic grammar with ANTLR and use it in Scala. The full code and grammar for this post can be found here.


ANTLR can generate lexers, parsers, tree parsers, and combined lexer-parsers. Parsers can automatically generate abstract syntax trees that can be further processed with tree parsers. ANTLR provides a single consistent notation for specifying lexers, parsers, and tree parsers. This is in contrast with other parser/lexer generators and adds greatly to the tool’s ease of use. It supports:

  1. Tree construction
  2. Tree walking
  3. Error recovery
  4. Error handling
  5. Translation

ANTLR supports a large number of target languages, so the same grammar can be used for both backend parsing and frontend validation. The following languages are supported:

Ada, ActionScript, C, C#, D, Emacs ELisp, Objective-C, Java, JavaScript, Python, Ruby, Perl 6, Perl, PHP, Oberon, and Scala.

How ANTLR Works

On a high level, here’s what you do to parse something with ANTLR:

  1. Create lexer rules
  2. Create parser rules that use the lexer's output
  3. Use lexer and parser to generate source code for a target language
  4. Use generated sources to convert some raw input into the structured form (AST)
  5. Do something with this structured data

We will begin to understand it with the help of an example. Let's say we want to create a DSL for simple arithmetic operations. A valid input (expression) will be:

3 + (4 * 5)

As humans, if we want to evaluate this expression, here’s what we will do:

  • Split the expression into its components:
    • In the above example, each character belongs to one of these groups:
      1. Operands (3, 4, 5)
      2. Operators (+ - * /)
      3. Whitespace
    • This part is called lexical analysis, where you convert raw text (a stream of characters) into tokens.
  • Create relationships between the tokens:
    • To evaluate the expression efficiently, we can build a tree-like structure that defines the relationships between the tokens.
    • This structure is called the AST (Abstract Syntax Tree). It is built by applying the rules that you define in your grammar to the input text. Once you have the AST, to evaluate the expression, we need to traverse or ‘walk’ it in a depth-first manner. We start at the root ‘+’ and go as deep into the tree as we can along each child. Then, we evaluate the operations as we come back out of the tree.
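The depth-first walk just described can be sketched in plain Scala. This is a hand-rolled tree for illustration only, not ANTLR's generated classes:

```scala
// A minimal sketch of the tree the parser produces for "3 + (4 * 5)",
// and a depth-first walk that evaluates it.
sealed trait Expr
case class Num(value: Double) extends Expr
case class BinOp(op: Char, left: Expr, right: Expr) extends Expr

def eval(e: Expr): Double = e match {
  case Num(v) => v
  case BinOp(op, l, r) =>
    // Recurse into the children first, then apply the operator on the way out.
    val (lv, rv) = (eval(l), eval(r))
    op match {
      case '+' => lv + rv
      case '-' => lv - rv
      case '*' => lv * rv
      case '/' => lv / rv
    }
}

// The AST for 3 + (4 * 5): '+' is the root, '*' is its right child.
val tree = BinOp('+', Num(3), BinOp('*', Num(4), Num(5)))
println(eval(tree))
```

Evaluating the children of ‘+’ before applying it is exactly what “coming back out of the tree” means.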

We will now set up the tools and try creating a simple grammar.



ANTLRWorks

ANTLR provides a GUI-based IDE, ANTLRWorks, for developing grammars. You can download it here. It combines an excellent grammar-aware editor with an interpreter for rapid prototyping and a language-agnostic debugger for isolating grammar errors.

IntelliJ Plugin

IntelliJ provides a plugin for ANTLR. Refer to this link to learn how to add the plugin.

Command Line Setup

You can directly create, test, and debug grammar from the command line, too. Here are the steps:

  1. Download the ANTLR jar
  2. Create an alias for the command to generate sources:
    • alias antlr4='java -jar /home/sam/softwares/antlr/antlr-4.6-complete.jar'
  3. Create an alias to test your grammar for some input:
    • alias grun='java -cp ".:/home/sam/softwares/antlr/antlr-4.6-complete.jar" org.antlr.v4.gui.TestRig'

Add these aliases to ~/.bashrc to be able to call the antlr4 and grun commands directly from anywhere.

Creating Grammar

A grammar will consist of two parts:

  • Lexer
  • Parser

Both of these can be defined in the same file, but for maintainability's sake, it's better to define them in separate files. Let's create a lexer and parser for a DSL that allows basic arithmetic operations on two numbers. Some valid inputs will be:

127.1 + 2717
2674 - 4735
47 * 74.1
271 / 281
10 + 2 

Let's first define the lexer rules in a file named ArithmeticLexer.g4 to extract tokens from the input:
lexer grammar ArithmeticLexer;

WS: [ \t\n]+ -> skip ;

NUMBER: ('0' .. '9') + ('.' ('0' .. '9') +)?;

ADD: '+';
SUB: '-';
MUL: '*';
DIV: '/';

  • The WS rule tells the lexer to skip all spaces, tabs, and newline characters
  • The NUMBER rule extracts integers and decimals as NUMBER tokens
  • The ADD/SUB/MUL/DIV rules assign a named token to each mathematical operator
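As a rough illustration of what these rules do, here is a toy tokenizer in plain Scala. The real lexer is generated by ANTLR from ArithmeticLexer.g4; the token names here simply mirror the grammar:

```scala
// Toy sketch of the lexer rules, for illustration only.
// Group 1 plays the role of NUMBER, group 2 the operators, group 3 WS.
def tokenize(input: String): List[(String, String)] = {
  val token = raw"(\d+(?:\.\d+)?)|([+\-*/])|(\s+)".r
  token.findAllMatchIn(input).toList.flatMap { m =>
    if (m.group(1) != null) Some("NUMBER" -> m.group(1))
    else if (m.group(2) != null) {
      val name = m.group(2) match {
        case "+" => "ADD"
        case "-" => "SUB"
        case "*" => "MUL"
        case "/" => "DIV"
      }
      Some(name -> m.group(2))
    } else None // whitespace matches are dropped, as the WS rule's skip specifies
  }
}

println(tokenize("10 + 2"))
```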

Now, let's write some parser rules in a file named ArithmeticParser.g4, which will process the tokens generated by the lexer and create an AST for a valid input.

parser grammar ArithmeticParser;

options { tokenVocab=ArithmeticLexer; }

expr: NUMBER operation NUMBER;

operation: (ADD | SUB | MUL | DIV);

  • expr is the base rule and will accept any two numbers separated by one of the valid operations.
  • The operation rule defines which tokens are valid operations.
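As a toy illustration (not the generated parser), the shape of the expr rule, NUMBER operation NUMBER, can be checked over a list of token names:

```scala
// Token names accepted by the operation rule.
val validOps = Set("ADD", "SUB", "MUL", "DIV")

// A token sequence matches the expr rule only if it is exactly
// NUMBER operation NUMBER; everything else is rejected.
def matchesExpr(tokenNames: List[String]): Boolean = tokenNames match {
  case List("NUMBER", op, "NUMBER") => validOps(op)
  case _                            => false
}

println(matchesExpr(List("NUMBER", "ADD", "NUMBER")))
```

The generated parser does far more than this (it builds the AST and reports what it expected on failure), but the rule it enforces has the same shape.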

Generating Sources for a Target Language

Now that we have our grammar, we can generate the lexer and parser source in any of the supported languages. Run the following from the command line:

antlr4 ArithmeticParser.g4
antlr4 ArithmeticLexer.g4

By default, it will generate the sources in Java. You can change that by passing the language argument. For example, the following command generates sources in JavaScript:

antlr4 -Dlanguage=JavaScript ArithmeticParser.g4

Instead of generating sources individually for the lexer and parser, you can generate both with a single command:

antlr4 *.g4

After you run the commands above, you will see the following Java sources generated in the same directory:

├── ArithmeticLexer.g4
├── ArithmeticLexer.java
├── ArithmeticLexer.tokens
├── ArithmeticParserBaseListener.java
├── ArithmeticParser.g4
├── ArithmeticParser.java
├── ArithmeticParserListener.java
└── ArithmeticParser.tokens

You can also provide a package name for generated sources, which can be seen below:

antlr4 -package arithmetic *.g4

ANTLR provides two ways to walk the AST: listeners and visitors. ANTLR doesn't generate sources for the visitor by default. Since we will be using the visitor pattern in Scala to avoid mutability, let's generate the visitor sources too by passing the -visitor flag:

antlr4 -visitor *.g4

Now, you will see the source for the visitor too:

├── ArithmeticLexer.g4
├── ArithmeticLexer.java
├── ArithmeticLexer.tokens
├── ArithmeticParserBaseListener.java
├── ArithmeticParserBaseVisitor.java
├── ArithmeticParser.g4
├── ArithmeticParser.java
├── ArithmeticParserListener.java
├── ArithmeticParser.tokens
└── ArithmeticParserVisitor.java

Next, compile the sources:

javac -cp ".:/home/sam/softwares/antlr/antlr-4.6-complete.jar" *.java

Now, you are ready to test any input against your DSL.

Testing the DSL

To do that, run the following command:

grun Arithmetic expr -tokens

The above command tells org.antlr.v4.gui.TestRig to run the Arithmetic grammar, starting from the rule named expr. The -tokens flag lets us see the tokens generated by the lexer.

Next, enter a valid input, like 10 + 2. Then, press Enter. Afterwards, press Ctrl+D. You will see output like the following:

$ grun Arithmetic expr -tokens
10 + 2

Since it didn't show any error, your input is valid. With the -tokens flag, each line of output shows a token's value, its name, and its start and end offsets.

In the case of invalid input, ANTLR will tell you what it was expecting. This can be seen below:

$ grun Arithmetic expr -tokens

line 2:0 missing NUMBER at '<EOF>'

In the case of valid input, in addition to the tokens, you can also see the AST by passing the -gui flag, as shown below:

$ grun Arithmetic expr -tokens -gui

1272.12 * 743.12

Using Generated Sources in Code

We will now see how to extend the generated interfaces and use them from within our code. As mentioned above, ANTLR provides two ways to walk the AST: visitors and listeners. We will first see how to use the listener pattern. Although the listener approach is commonly used by Java devs, Scala folks will not like it, because its callback methods can only return Unit. Hence, you will need intermediate mutable variables, leading to side effects. Refer to this post for a comparison between the two patterns. You can check out the complete code here.

The first thing that you need to do is add an ANTLR dependency:

libraryDependencies ++= Seq(
  "org.antlr" % "antlr4-runtime" % "4.6",
  "org.antlr" % "stringtemplate" % "3.2"
)

Next, you need to import all the generated sources into your project and create a parse method that will accept an input expression:

def parse(input:String) = {
  println("\nEvaluating expression " + input)

  val charStream = new ANTLRInputStream(input)
  val lexer = new ArithmeticLexer(charStream)
  val tokens = new CommonTokenStream(lexer)
  val parser = new ArithmeticParser(tokens)

  /* Implement listener and use parser */
}

  • First, we convert the input text to a char stream (charStream), because the lexer operates at the character level.
  • Then, we create a lexer object from ArithmeticLexer, generated from the definitions in ‘ArithmeticLexer.g4’, and pass the input stream to it.
  • tokens holds all the tokens obtained by applying the lexer rules to the input text.
  • Finally, we create a parser that applies the rules we defined in ArithmeticParser.g4.

The next thing that we need to do is override some methods of the generated base listener class. Let's see the contents of the generated ArithmeticParserBaseListener:

public class ArithmeticParserBaseListener implements ArithmeticParserListener {
  //enter and exit methods for grammar rules
  @Override public void enterExpr(ArithmeticParser.ExprContext ctx) { }
  @Override public void exitExpr(ArithmeticParser.ExprContext ctx) { }
  @Override public void enterOperation(ArithmeticParser.OperationContext ctx) { }
  @Override public void exitOperation(ArithmeticParser.OperationContext ctx) { }

  //default grammar-independent methods
  @Override public void enterEveryRule(ParserRuleContext ctx) { }
  @Override public void exitEveryRule(ParserRuleContext ctx) { }
  @Override public void visitTerminal(TerminalNode node) { }
  @Override public void visitErrorNode(ErrorNode node) { }
}
For every rule that we defined in ArithmeticParser.g4, ANTLR created an enter and an exit method. Since we had two rules, expr and operation, it created four methods. As the names imply, these are triggered every time the walker enters and exits a matched rule. For now, let's focus on the enter method of our starting rule, expr:

@Override public void enterExpr(ArithmeticParser.ExprContext ctx) { }

Notice that every rule has a context object that carries all the meta information, as well as the matched input. Also, note that all of these methods return void, which means that you need to use mutable variables to store computed values if they need to be shared among different rules or with the main caller. This problem can be solved by using the visitor instead of the listener, as mentioned earlier in this post.

Now, we will create our own class by extending ArithmeticParserBaseListener and overriding the enterExpr method.

class ArithmeticListenerApp extends ArithmeticParserBaseListener {

  override def enterExpr(ctx: ArithmeticParser.ExprContext): Unit = {
    val exprText = ctx.getText
    println(s"Expression after tokenization = $exprText")

    // ctx.NUMBER() is a java.util.List; requires import scala.collection.JavaConverters._
    val operands = ctx.NUMBER().asScala.toList.map(_.getText)
    val operand1 = parseDouble(operands.lift(0).getOrElse("0.0")).getOrElse(0.0)
    val operand2 = parseDouble(operands.lift(1).getOrElse("0.0")).getOrElse(0.0)

    val operation = ctx.operation().getText

    calculate(operand1, operand2, operation) match {
      case Some(result) =>
        println(s"Result of $operand1 $operation $operand2 = $result")
      case None =>
        println(s"Failed to evaluate expression. Tokenized expr = $exprText")
    }
  }

  def parseDouble(s: String): Option[Double] = Try(s.toDouble).toOption

  def calculate(op1: Double, op2: Double, operation: String): Option[Double] =
    operation match {
      case "+" => Some(op1 + op2)
      case "-" => Some(op1 - op2)
      case "*" => Some(op1 * op2)
      case "/" => Try(op1 / op2).toOption
      case _ =>
        println(s"Unsupported operation")
        None
    }
}

  • exprText holds the tokenized text matched by this rule.
  • The expr rule’s context knows about NUMBER and operation. Since NUMBER occurs twice in the rule, ctx.NUMBER() returns a list containing both numbers.
  • ctx.operation().getText gives us the operator matched by the operation rule.
  • We calculate the value, and since there is no way to return it to the caller from the enterExpr method, we just print it. We could have stored it in some mutable variable, in case the caller needed it.
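To make that mutable-variable workaround concrete, here is a hypothetical sketch. The class and method names are invented for illustration and are not part of the generated sources:

```scala
import scala.util.Try

// Hypothetical sketch: because a listener callback returns Unit, a result that
// the caller needs has to be stashed in a mutable field on the listener instance.
class ResultHolder {
  var lastResult: Option[Double] = None // mutated on every call: a side effect

  def record(op1: Double, op2: Double, operation: String): Unit = {
    lastResult = operation match {
      case "+" => Some(op1 + op2)
      case "-" => Some(op1 - op2)
      case "*" => Some(op1 * op2)
      case "/" => Try(op1 / op2).toOption
      case _   => None
    }
  }
}

val holder = new ResultHolder
holder.record(10, 2, "+")
println(holder.lastResult)
```

A visitor avoids this entirely because each visit method returns a value directly.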

Now that we have implemented this, we need to use it in the parse method that we defined earlier, as shown below:

def parse(input:String) = {
  println("\nEvaluating expression " + input)

  val charStream = new ANTLRInputStream(input)
  val lexer = new ArithmeticLexer(charStream)
  val tokens = new CommonTokenStream(lexer)
  val parser = new ArithmeticParser(tokens)

  val arithmeticListener = new ArithmeticListenerApp()

  // Parse starting from the expr rule, then walk the tree with our listener
  val tree = parser.expr()
  ParseTreeWalker.DEFAULT.walk(arithmeticListener, tree)
}

Now, let's test it on some input expressions:

val expressions = List(
  "127.1 + 2717",
  "2674 - 4735",
  "47 * 74.1",
  "271 / 281",
  "12 ^ 3" // unsupported expression
)

expressions.foreach(parse)

You will see the following output:

Evaluating expression 127.1 + 2717
Expression after tokenization = 127.1+2717
Result of 127.1 + 2717.0 = 2844.1

Evaluating expression 2674 - 4735
Expression after tokenization = 2674-4735
Result of 2674.0 - 4735.0 = -2061.0

Evaluating expression 47 * 74.1
Expression after tokenization = 47*74.1
Result of 47.0 * 74.1 = 3482.7

Evaluating expression 271 / 281
Expression after tokenization = 271/281
Result of 271.0 / 281.0 = 0.9644128113879004

Evaluating expression 12 ^ 3
line 1:3 token recognition error at: '^'
line 1:5 missing {'+', '-', '*', '/'} at '3'
Expression after tokenization = 123
Unsupported operation
Failed to evaluate expression. Tokenized expr = 123

I hope this post gave you an idea of how to get started creating your own DSL. Check out part two of this post to learn more about listeners and visitors in ANTLR.
