On Some Aspects of Big Data Processing in Apache Spark, Part 4: Versatile JSON and YAML Parsers

In this post, I present versatile JSON and YAML parsers for a Spark application.

By Alexander Eleseev · Aug. 23, 22 · Tutorial

In my previous post, I presented design patterns for programming Spark applications in a modular, maintainable, and serializable way. This time, I demonstrate how to configure versatile JSON and YAML parsers for use in Spark applications.

A Spark application typically needs to ingest JSON data, transform it, and then save it to a data source. YAML data, on the other hand, is needed primarily to configure Spark jobs. In both cases, the data has to be parsed according to a predefined template. In a Java Spark application, these templates are POJOs. How can we program a single parser method that handles a wide class of such POJO templates, with data taken from either a local or a distributed file system?

This post is organized as follows:

  • Section 1 demonstrates how to configure a versatile, compact JSON parser.
  • Section 2 shows how to do the same for a YAML parser.

Fully workable code can be found here. Let's go.

JSON Parser

A Spark application may need to read a JSON file either from a local file system or from a distributed Hadoop file system (HDFS). Typically, configuration data is loaded from the local file system. Another scenario in which local JSON data is loaded is running, testing, and debugging a Spark application in local mode. In these cases, it is convenient to have a JSON parser that lets us quickly switch between the local file system and HDFS.

To attack this problem, let's see how the data streams differ for a local file system and HDFS. For a local file system, a DataInputStream is obtained as follows:

Java
 
DataInputStream dataInputStream = new DataInputStream(
        new FileInputStream("src/test/resources/file_name.json"));

On the other hand, an HDFS data stream is obtained in a much more involved way:

Java
 
final SparkSession sparkSession = SparkSession
        .builder()
        .config("//--config params--//")
        .appName("APP_NAME_1")
        //--enable other features--//
        .getOrCreate();
SparkContext sparkContext = sparkSession.sparkContext();
final Configuration hadoopConf = sparkContext.hadoopConfiguration();
SerializableConfiguration scfg = new SerializableConfiguration(hadoopConf);
FileSystem fileSystem = FileSystem.get(scfg.value());
FSDataInputStream stream = fileSystem.open(new Path("//--hdfs file path--//"));

First, a Spark session is created; the session is configured to use HDFS and other distributed data sources (e.g., Apache Hive). Then, after four intermediate steps, an instance of a distributed FileSystem is obtained. Finally, a distributed FSDataInputStream is created.

The local and the distributed file streams are related:

Java
 
public class FSDataInputStream extends DataInputStream implements Seekable /* , ... */

So, the local and the distributed data streams can be used in the same parser with a generic <? extends DataInputStream> input stream type. Let's try the following solution:

Java
 
public class ParserService  {
  public static  <T extends DataInputStream, B> B parseSingleDeclarationStream(T inputStream,
          Class<B> className) throws IOException {
      Gson gson = new GsonBuilder().setDateFormat("yyyy-MM-dd").create();
      JsonReader reader = new JsonReader(new InputStreamReader(inputStream, "UTF-8"));
      B message = gson.fromJson(reader, className);
      reader.close();
      return message;
  }
}

We use the Gson library to parse the input stream. We also need to provide the POJO class for the parser to work. Here is an example of how to use this parser (see the code for details):

Java
 
DataInputStream dataInputStream = new DataInputStream(
        new FileInputStream("src/test/resources/declaration.json"));
Declaration declaration = ParserService.parseSingleDeclarationStream(dataInputStream, Declaration.class);
assertEquals(declaration.getDeclaration_id(), "ABCDE12345");
assertEquals(declaration.getDeclaration_details().length, 1);
assertEquals(declaration.getDeclaration_details()[0].getDetail_1(), 10);

Here, the Declaration and DeclarationDetails classes are simple POJOs:

Java
 
public class Declaration {
    private String declaration_id;
    private Short declaration_version;
    private Short declaration_type;
    private Date declaration_date;
    private DeclarationDetails[] declaration_details;
    //--getters and setters--//
}

public class DeclarationDetails {
    private int detail_1;
    private int detail_2;
    //--getters and setters--//
}

So, this parser correctly parses child arrays and objects. Notice that Gson also parses date/time strings as long as the strings comply with a single pattern string ("yyyy-MM-dd" in this case). 
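
Because FSDataInputStream extends DataInputStream, the same method also accepts a distributed stream, so switching between the local file system and HDFS is just a matter of which stream we pass in. Here is a minimal sketch that reuses the fileSystem instance from the earlier snippet; the HDFS path is a hypothetical placeholder:

Java
 
// Minimal sketch: the fileSystem instance comes from the SparkSession setup shown above.
// The HDFS path is a hypothetical placeholder.
FSDataInputStream hdfsStream = fileSystem.open(new Path("/data/input/declaration.json"));
Declaration declaration = ParserService.parseSingleDeclarationStream(hdfsStream, Declaration.class);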

YAML Parser

In our project, we use local YAML files to specify Spark jobs. For example, a basic Spark job configuration to transfer data from a Postgres database to a Hive database looks like this:

YAML
 
---
jobName: JOB_1
hiveTable: hive_table_1
rdbTable: rdb_table_1
---
jobName: JOB_2
hiveTable: hive_table_2
rdbTable: rdb_table_2

Here, we need to transfer data from the relational database tables (rdbTable) to the corresponding Hive tables (hiveTable). We need to load and parse this config data to get a List<JobConfig>, where JobConfig is:

Java
 
public class JobConfig {
    private String jobName;
    private String hiveTable;
    private String rdbTable;
    //--setters and getters--//
}

The following parser solves this problem:

Java
 
public class ParserService  {
  public static <T extends JobConfig> List<T> getLocalFSJobList(String path,
          Class<T> className) throws FileNotFoundException {
      List<T> list = new ArrayList<>();
      DataInputStream dataStream = new DataInputStream(
              new FileInputStream(path));
      YamlDecoder dec = new YamlDecoder(dataStream);
      YamlStream<T> stream = dec.asStreamOfType(className);
      while (stream.hasNext()) {
          T item = stream.next();
          list.add(item);
      }
      return list;
  }
}

Here we use YamlDecoder from the JYaml library. We use a local DataInputStream, although a child stream class, as in the previous section, also works. In addition, this parser uses a <T extends JobConfig> output type. Such a more specific output type allows us to add extra parameters to the basic JobConfig class.

Notice that it is not necessary to extend the JobConfig superclass for this algorithm to work. I added this constraint because, usually, the YAML files are local and specify Spark jobs, so more general types are not needed.
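
For illustration, here is a minimal sketch of how the bounded type can be used with a richer config; ExtendedJobConfig, its partitionColumn field, and the jobs_extended.yml file are hypothetical, not part of the project:

Java
 
// Hypothetical subclass illustrating the <T extends JobConfig> bound.
public class ExtendedJobConfig extends JobConfig {
    private String partitionColumn; // extra parameter, hypothetical
    //--setters and getters--//
}

// The same parser method then returns the richer type:
List<ExtendedJobConfig> extendedJobs = ParserService.getLocalFSJobList(
        "src/test/resources/jobs_extended.yml", ExtendedJobConfig.class);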

This parser is used as follows:

Java
 
@Test
public void testYmlParser() throws FileNotFoundException {
    List<JobConfig> jobs = ParserService.getLocalFSJobList("src/test/resources/jobs.yml", JobConfig.class);
    assertEquals(jobs.size(), 2);
    assertEquals(jobs.get(0).getJobName(), "JOB_1");
    assertEquals(jobs.get(0).getRdbTable(), "rdb_table_1");
    assertEquals(jobs.get(1).getJobName(), "JOB_2");
    assertEquals(jobs.get(1).getRdbTable(), "rdb_table_2");
}

The parser correctly recognizes multiple jobs in a single file and assembles the jobs into a list. See the code for details.
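
Once parsed, the job list can drive the actual transfer. Here is a minimal sketch of one way to consume it, assuming a SparkSession with Hive support; jdbcUrl and connectionProperties are hypothetical placeholders:

Java
 
// Minimal sketch: loop over the parsed configs and copy each relational table into Hive.
// jdbcUrl and connectionProperties are hypothetical placeholders.
List<JobConfig> jobs = ParserService.getLocalFSJobList("src/test/resources/jobs.yml", JobConfig.class);
for (JobConfig job : jobs) {
    Dataset<Row> df = sparkSession.read()
            .jdbc(jdbcUrl, job.getRdbTable(), connectionProperties);
    df.write()
            .mode(SaveMode.Overwrite)
            .saveAsTable(job.getHiveTable());
}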

Conclusions

In this post, I demonstrated how to program versatile JSON and YAML parsers. The parsers can be easily configured for local and distributed file systems, and they accept a wide range of POJO templates. I hope these tricks will help you in your projects.

Opinions expressed by DZone contributors are their own.
