
How to Use Dynamic Data Transpose in Spark

We take a look at how to use in-memory operators and the Scala language to work with dynamically transposed data in Apache Spark.

Dynamic transpose is a demanding transformation in Spark, as it requires a lot of iterations over the data. This article will give you a clear idea of how to handle this complex scenario with in-memory operators.

First, let's look at the source data we have:

idoc_number,orderid,idoc_qualifier_org,idoc_org
7738,2364,6,0
7738,2364,7,0
7738,2364,8,mystr1
7738,2364,12,mystr2
7739,2365,12,mystr3
7739,2365,7,mystr4

We also have a lookup table for the idoc_qualifier_org column in the source records. Since the lookup table is small, we can expect it to fit comfortably in the cache and in driver memory, which makes it a natural candidate for a broadcast join (see the sketch after the table below).

qualifier,desc
6,Division
7,Distribution Channel
8,Sales Org
12,Order type
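
Because this lookup easily fits in driver memory, Spark can ship it to every executor as a broadcast table instead of shuffling it. A minimal sketch of that join, assuming the two CSVs above are already loaded as data_df and lkp_df (the same names used in the full listing below):

import org.apache.spark.sql.functions.broadcast

// Broadcast the small lookup table; only the source data stays distributed
val result_df = data_df.join(
 broadcast(lkp_df),
 data_df("idoc_qualifier_org") === lkp_df("qualifier"),
 "inner"
)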

The expected output of the Dynamic Transpose operation is:

idoc_number,orderid,Division,Distribution Channel,Sales Org,Order type
7738,2364,0,0,mystr1,mystr2
7739,2365,null,mystr4,null,mystr3

The code below transposes the data based on the columns actually present in it. It is a different way to work with transposed data in Spark: it makes heavy use of Spark's complex data types (maps and arrays) and keeps the cost of the iterations in check.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{broadcast, col, collect_list, lit, map, split, udf}

object DynamicTranspose {

 // Turns the collected (desc -> idoc_org) maps of one group into a single
 // "!"-delimited string, following the column order given by the rule string.
 def dataValidator(map_val: Seq[Map[String, String]], rule: String): String = {
  try {
   val rule_array = rule.split("#!").toList
   val src_map = map_val.toList.flatten.toMap
   // Look up every expected qualifier description; missing ones become "null"
   rule_array.map(f => src_map.getOrElse(f, "null")).mkString("!")
  } catch {
   case t: Throwable =>
    t.printStackTrace()
    "0"
  }
 }

 def main(args: Array[String]): Unit = {

  val spark = SparkSession.builder()
   .master("local[*]")
   .config("spark.sql.warehouse.dir", "<src dir>")
   .getOrCreate()
  import spark.implicits._

  val data_df = spark.read.option("header", "true").csv("<data path src>")
  val lkp_df = spark.read.option("header", "true").csv("<lookup path source>")

  // The lookup table is small, so broadcast it for the join
  val result_df = data_df.join(broadcast(lkp_df), $"idoc_qualifier_org" === $"qualifier", "inner")

  // Collect the (desc -> idoc_org) pairs of every order into a list of maps
  val df1 = result_df
   .groupBy(col("idoc_number"), col("orderid"))
   .agg(collect_list(map($"desc", $"idoc_org")) as "map")

  // Build the rule string (the target column names) from the lookup table
  val map_val = lkp_df.rdd.map(row => row.getString(1)).collect().mkString("#!")

  val recdValidator = udf(dataValidator _)

  // Resolve each group's maps against the rule string, then split the result into an array
  var latest_df = df1
   .withColumn("explode_out", split(recdValidator(df1("map"), lit(map_val)), "!"))
   .drop("map")

  // Promote every array element to its own column, named after the qualifier description
  val columns = map_val.split("#!").toList
  latest_df = columns.zipWithIndex.foldLeft(latest_df) {
   (memoDF, column) => memoDF.withColumn(column._1, col("explode_out")(column._2))
  }.drop("explode_out")

  latest_df.show()
 }

}
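
Run against the sample CSVs above, latest_df.show() should print something along these lines (the column names come straight from the lookup table's desc values, and the row order is not guaranteed):

+-----------+-------+--------+--------------------+---------+----------+
|idoc_number|orderid|Division|Distribution Channel|Sales Org|Order type|
+-----------+-------+--------+--------------------+---------+----------+
|       7738|   2364|       0|                   0|   mystr1|    mystr2|
|       7739|   2365|    null|              mystr4|     null|    mystr3|
+-----------+-------+--------+--------------------+---------+----------+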

Hope this helps!

