Using Spark Listeners
In this article, we discuss how to use Spark listeners more effectively and provide a framework to better perform ETL with Spark.
In the last quarter of 2019, I developed a metadata-driven ingestion engine using Spark. The framework/library has multiple patterns to cater to different source and destination combinations. For example, two patterns are available for loading flat files to cloud storage (one to load data to AWS S3 and another to load data to Azure Blob Storage).
As data loading philosophies have shifted from Extract-Transform-Load (ETL) to Extract-Load-Transform (ELT), such a framework is very useful, as it reduces the time needed to set up ingestion jobs.
An important aspect of any ingestion engine is to know how many records have been read from a given source and written to a destination. Normally, the approach that comes to mind is to perform a count operation on the DataFrame that has been loaded. This would give us the count of records loaded from the source. In the case of writing data to the store, we would need to load the data into another DataFrame and run a count on it.
But a count operation on a DataFrame can be expensive. Is there an alternative? As it turns out, there is: register for Spark events. This is done by extending our class from the SparkListener class and overriding either the onStageCompleted method or the onTaskEnd method (depending on what we wish to do).

Whenever a stage is completed, Spark invokes the onStageCompleted method on the registered listener. This method allows us to track the execution time and the CPU time taken by the executor. When a task is completed, Spark invokes the onTaskEnd method on the listener. This method can be used to determine the number of records read and written.
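To make this concrete, here is a minimal sketch of such a listener. The SparkListener subclasses and the metrics fields follow the Spark scheduler API; the RecordMetrics holder class and its field names are a hypothetical illustration, not part of Spark or of the framework described here.

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted, SparkListenerTaskEnd}

// Hypothetical holder for the numbers we want to track.
class RecordMetrics {
  var recordsRead: Long = 0L
  var recordsWritten: Long = 0L
  var executorRunTimeMs: Long = 0L
}

class IngestionListener(metrics: RecordMetrics) extends SparkListener {

  // Invoked by Spark when a stage finishes; stage-level task metrics
  // report the executor run time in milliseconds.
  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit = {
    metrics.executorRunTimeMs += stageCompleted.stageInfo.taskMetrics.executorRunTime
  }

  // Invoked by Spark when a task finishes; input/output metrics
  // carry the per-task record counts.
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    val tm = taskEnd.taskMetrics
    if (tm != null) {
      metrics.recordsRead += tm.inputMetrics.recordsRead
      metrics.recordsWritten += tm.outputMetrics.recordsWritten
    }
  }
}
```

The listener is registered once per application, for example with `spark.sparkContext.addSparkListener(new IngestionListener(metrics))`.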
To keep track of the execution time, the count of records read, and the count of records written, I present a few helper classes in this article. To register for stage-completion events, you need to derive your class from the StageCompletedEventConsumer trait. To register for the read count, you derive your class from the corresponding read-event consumer trait, and to register for the write count, from the RecordsWrittenEventConsumer trait. After deriving classes from the given traits, you add each class to the respective manager class. When the event occurs, Spark will invoke the manager class, which in turn will inform all the registered listeners.
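The consumer/manager pattern described above can be sketched as follows. The trait name RecordsWrittenEventConsumer mirrors the article; the method names, the manager class, and its internals are assumptions made for illustration.

```scala
// Consumers implement this trait to receive write-count events.
trait RecordsWrittenEventConsumer {
  def onRecordsWritten(count: Long): Unit
}

// The manager keeps a registry of consumers and fans events out to them.
// In the framework, its publish method would be called from the
// SparkListener's onTaskEnd handler.
class RecordsWrittenEventManager {
  private var consumers: List[RecordsWrittenEventConsumer] = Nil

  def register(consumer: RecordsWrittenEventConsumer): Unit =
    consumers = consumer :: consumers

  def publish(count: Long): Unit =
    consumers.foreach(_.onRecordsWritten(count))
}

// Example consumer that simply accumulates the reported counts.
class CountingConsumer extends RecordsWrittenEventConsumer {
  var total: Long = 0L
  def onRecordsWritten(count: Long): Unit = total += count
}
```

The read-count and stage-completion managers would follow the same shape, so adding a new metric consumer never requires touching the listener itself.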
The read event presents an interesting situation. When Spark reads data, the read event is invoked twice: first after reading the first record, and again after loading all the records. In other words, the read-event listener is invoked with two values: the first time, the number of records read is one; the second time, it is the total number of records in the data set.
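Because of this double invocation, a consumer that naively summed the reported values would over-count. One simple hedge for the single-read scenario described above (an assumption on my part, not the article's code) is to keep the maximum value seen, so the early "1" is superseded by the final total:

```scala
// Tracks the read count while tolerating the duplicate read event:
// the first invocation reports 1, the second reports the full total,
// so the maximum of the reported values is the actual record count.
class ReadCountTracker {
  private var maxSeen: Long = 0L

  def onRecordsRead(count: Long): Unit =
    maxSeen = math.max(maxSeen, count)

  def recordsRead: Long = maxSeen
}
```

Note that this only holds for a single read per tracker; a job reading several sources would need one tracker per source (or per stage).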