Using Spark Listeners
In the last quarter of 2019, I developed a metadata-driven ingestion engine using Spark. The framework/library provides multiple patterns to cater to different source and destination combinations. For example, two patterns are available for loading flat files to cloud storage (one to load data to AWS S3 and another to load data to Azure Blob).
As data loading philosophies have shifted from Extract-Transform-Load (ETL) to Extract-Load-Transform (ELT), such a framework is very useful, as it reduces the time needed to set up ingestion jobs.
An important aspect of any ingestion engine is to know how many records have been read from a given source and written to a destination. Normally, the approach that comes to mind is to perform a count operation on the DataFrame that has been loaded. This would give us the count of records loaded from the source. In the case of writing data to the store, we would need to load the data into another DataFrame and run a count on it.
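For context, a minimal sketch of that count-based approach might look like the following (the input path and write destination are illustrative only, not part of the framework described below):
// Counting what was read: triggers a full pass over the source data
val sourceDf = spark.read.option("header", "true").csv("stations.csv")
val readCount = sourceDf.count()
// Counting what was written: requires reading the written data back into another DataFrame
sourceDf.write.mode("overwrite").parquet("/tmp/stations_out")
val writtenCount = spark.read.parquet("/tmp/stations_out").count()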
But a count operation on a DataFrame can be expensive. Is there an alternative? As it turns out, there is. The alternative is to register for Spark events. This is done by extending our class from the SparkListener class and overriding either the onStageCompleted method or the onTaskEnd method (depending on what we wish to do).
Whenever a stage is completed, Spark invokes the onStageCompleted method on the registered listener. This method allows us to track the execution time and the CPU time taken by the executor. When a task is completed, Spark invokes the onTaskEnd method on the Spark listener. This method can be used to determine the number of records read and written.
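As a bare-bones sketch (the class name MetricsListener is my own; the complete helper classes follow below), such a listener looks like this:
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted, SparkListenerTaskEnd}
// Minimal listener: print the metrics exposed by the two callbacks
class MetricsListener extends SparkListener {
  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit = {
    val metrics = stageCompleted.stageInfo.taskMetrics
    println("Stage run time: " + metrics.executorRunTime + ", CPU time: " + metrics.executorCpuTime)
  }
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    println("Records read: " + taskEnd.taskMetrics.inputMetrics.recordsRead + ", written: " + taskEnd.taskMetrics.outputMetrics.recordsWritten)
  }
}
// Register the listener on the active SparkContext (sc in spark-shell)
sc.addSparkListener(new MetricsListener)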
To keep track of the execution time, the count of records read, and the count of records written, I present a few helper classes in this article. To register for stage-completion events, derive your class from the StageCompletedEventConsumer trait. To register for the read count, derive your class from the RecordsLoadedEventConsumer trait. To register for the write count, derive your class from the RecordsWrittenEventConsumer trait. After deriving classes from the given traits, add instances of them to the respective manager classes. When the event occurs, Spark will invoke the manager class, which in turn will inform all the registered consumers.
import java.util.Properties
import org.apache.spark.SparkContext
import org.apache.spark.sql.{DataFrame, SparkSession, SQLContext}
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd, SparkListenerStageCompleted}
import org.apache.spark.scheduler._
import scala.collection.mutable._
val sparkContext = sc
val sparkSession = spark
val sqlContext = sparkSession.sqlContext
trait StageCompletedEventConsumer {
def execute(executorRunTime: Long, executorCPUTime: Long)
}
class StageCompletionManager extends SparkListener
{
var consumerMap: scala.collection.mutable.Map[String, StageCompletedEventConsumer] = scala.collection.mutable.Map[String, StageCompletedEventConsumer]()
def addEventConsumer(sparkContext: SparkContext, id: String, consumer: StageCompletedEventConsumer)
{
consumerMap += (id -> consumer)
}
def removeEventConsumer(id: String)
{
consumerMap -= id
}
override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit =
{
for ( (k, v) <- consumerMap ) {
if ( v != null ) {
v.execute(stageCompleted.stageInfo.taskMetrics.executorRunTime, stageCompleted.stageInfo.taskMetrics.executorCpuTime)
}
}
}
}
trait RecordsLoadedEventConsumer {
def execute(recordsRead: Long)
}
class RecordsLoadedManager extends SparkListener
{
var consumerMap: scala.collection.mutable.Map[String, RecordsLoadedEventConsumer] = scala.collection.mutable.Map[String, RecordsLoadedEventConsumer]()
def addEventConsumer(sparkContext: SparkContext, id: String, consumer: RecordsLoadedEventConsumer)
{
consumerMap += (id -> consumer)
}
def removeEventConsumer(id: String)
{
consumerMap -= id
}
override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit =
{
val recordsRead = taskEnd.taskMetrics.inputMetrics.recordsRead
for ( (k, v) <- consumerMap ) {
if ( v != null ) {
v.execute(recordsRead)
}
}
}
}
trait RecordsWrittenEventConsumer {
def execute(recordsWritten: Long)
}
class RecordsWrittenManager extends SparkListener
{
var consumerMap: scala.collection.mutable.Map[String, RecordsWrittenEventConsumer] = scala.collection.mutable.Map[String, RecordsWrittenEventConsumer]()
def addEventConsumer(sparkContext: SparkContext, id: String, consumer: RecordsWrittenEventConsumer)
{
consumerMap += (id -> consumer)
}
def removeEventConsumer(id: String)
{
consumerMap -= id
}
override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit =
{
val recordsWritten = taskEnd.taskMetrics.outputMetrics.recordsWritten
for ( (k, v) <- consumerMap ) {
if ( v != null ) {
v.execute(recordsWritten)
}
}
}
}
class Consumer1 extends RecordsLoadedEventConsumer
{
override def execute (recordsRead: Long) {
println("Consumer 1: " + recordsRead.toString)
}
}
class Consumer2 extends RecordsLoadedEventConsumer
{
override def execute(recordsRead: Long) {
println("Consumer 2 : " + recordsRead.toString)
}
}
class Consumer3 extends StageCompletedEventConsumer
{
override def execute(executorRunTime: Long, executorCPUTime: Long)
{
println ("Consumer 3: " + executorRunTime.toString + ", " + executorCPUTime.tostring)
}
}
val c1: Consumer1 = new Consumer1
val c2: Consumer2 = new Consumer2
val c3: Consumer3 = new Consumer3
val rm: RecordsLoadedManager = new RecordsLoadedManager
sparkContext.addSparkListener(rm)
rm.addEventConsumer(sparkContext, "cl", c1)
rm.addEventConsumer(sparkContext, "c2", c2)
val sm: StageCompletionManager = new StageCompletionManager
sparkContext.addSparkListener(sm)
sm.addEventConsumer(sparkContext, "c3", c3)
val inputPath = "stations.csv"
val df = sparkSession.read.format("csv").option("header", "true").option("sep", ",").option("inferSchema", "true").csv(inputPath)
rm.removeEventConsumer("c2")
val df2 = sparkSession.read.format("csv").option("header", "true").option("sep", ",").option("inferSchema", "true").csv(inputPath)
The read event presents an interesting situation. When Spark reads data, the read event is invoked twice - the first time after reading the first record and the second time after loading all the records. In other words, the read event listener is invoked with two values. The first time, the value of records read is one. The second time, the value of records read is the number of records in the data set.
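One way to handle this in a consumer, shown here as a sketch (the FinalCountConsumer name and the max-based logic are my own illustration, not part of the framework above), is to keep only the largest count reported:
// Remembers the highest record count seen, so the early callback (records read = 1) is effectively ignored
class FinalCountConsumer extends RecordsLoadedEventConsumer {
  var finalCount: Long = 0L
  override def execute(recordsRead: Long): Unit = {
    if (recordsRead > finalCount) finalCount = recordsRead
    println("Final read count so far: " + finalCount.toString)
  }
}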