
How to Use Reverse Transpose in Spark

In this article, we take a look at how to reverse transpose sample data based on given key columns. Read on to get started!

In a previous article, I shared how to perform data transposition using complex data types in Apache Spark. In this article, we will reverse transpose sample data based on given key columns; the code can be made generic by parameterizing those key columns. One important point to remember for this implementation: since we explode the data based on the denormalized column count, the data volume in each executor will go up abruptly. So we need to calculate the present data volume on each partition and the expected growth of the data in each partition, and then repartition the data so that the executors will not get an OOM exception.
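As a rough, minimal sketch of that repartitioning idea (this is not part of the original code; the expansion heuristic and the data_df name are assumptions), one could scale the partition count by the number of non-key item columns, since the explode step emits one output row per item column:

// Sketch only: widen the partition count so that the per-partition data
// volume stays roughly constant after the explode step.
val keyCols = "id,country,product_line_item,Product_wing,Division,region,territory,item_id,unique_id,Store_name".split(",")
val expansionFactor = data_df.columns.diff(keyCols).length // 10 item columns in the sample below
val repartitioned = data_df.repartition(data_df.rdd.getNumPartitions * expansionFactor)

With that caveat noted, first, let us look at the sample data: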

id|country|product_line_item|Product_wing|Division|region|territory|item_id|unique_id|Store_name|Item1|Item2|Item3|Item4|Item5|Item6|Item7|Item8|Item9|Item10
1|India|Product-C|(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9|1000655|Queen of Himalaya|750|850|1500|150|1000||2900|2500|1200|5050
1|India|Product-A|(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)|HM[Terr](A)|12|1000416|Snow Peaks|765|900|||1200|1440|3500|2600||6000

For this particular sample file, the key columns are id, country, product_line_item, Product_wing, Division, region, territory, item_id, unique_id, and Store_name. Every other column is normalized into (key, value) rows, with the key columns repeated for each row. The code looks like this:

import org.apache.spark.sql.{SparkSession, functions}
import org.apache.spark.sql.functions.{col, explode, lit}

object ReverseTranspose {
	def main(args: Array[String]): Unit = {
		System.setProperty("hadoop.home.dir", "C://winutils//")
		val spark = SparkSession.builder().master("local[*]")
			.config("spark.sql.warehouse.dir", "<warehouse dir>").getOrCreate()
		import spark.implicits._

		// Read the pipe-delimited sample file, dropping malformed rows
		val data_df = spark.read.option("header", "true").option("delimiter", "|")
			.option("mode", "DROPMALFORMED").csv("<src dir>")
		data_df.show(false)

		// Key columns stay fixed; every other column is normalized into rows
		val str = "id,country,product_line_item,Product_wing,Division,region,territory,item_id,unique_id,Store_name"
		val mapKeys = data_df.columns.diff(str.split(","))

		// Alternating (column-name literal, column value) pairs for map()
		val pairs = mapKeys.flatMap(k => Seq(lit(k), col(k)))

		// Gather the item columns into one map column, then explode it into key/value rows
		val mapped = data_df.select($"id", $"country", $"product_line_item", $"Product_wing", $"Division",
			$"region", $"territory", $"item_id", $"unique_id", $"Store_name", functions.map(pairs: _*) as "map")
		val final_df = mapped.select($"id", $"country", $"product_line_item", $"Product_wing", $"Division",
			$"region", $"territory", $"item_id", $"unique_id", $"Store_name", explode($"map"))
		final_df.show(false)
	}
}
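For comparison only (this variant is not from the original article), the same unpivot can be written with Spark SQL's built-in stack expression, at the cost of hard-coding the item columns:

// Sketch: stack(n, k1, v1, ..., kn, vn) emits one (key, value) row per pair
val stacked = data_df.selectExpr(
	"id", "country", "product_line_item", "Product_wing", "Division",
	"region", "territory", "item_id", "unique_id", "Store_name",
	"stack(10, 'Item1', Item1, 'Item2', Item2, 'Item3', Item3, 'Item4', Item4, " +
		"'Item5', Item5, 'Item6', Item6, 'Item7', Item7, 'Item8', Item8, " +
		"'Item9', Item9, 'Item10', Item10) as (key, value)")
stacked.show(false)

The map/explode approach above discovers the item columns from the schema, so it keeps working when the number of item columns changes.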

Either way, the output looks like this:

+---+-------+-----------------+---------------------+-------------------+-------------------+-----------+-------+---------+-----------------+------+-----+
|id |country|product_line_item|Product_wing         |Division           |region             |territory  |item_id|unique_id|Store_name       |key   |value|
+---+-------+-----------------+---------------------+-------------------+-------------------+-----------+-------+---------+-----------------+------+-----+
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item1 |750  |
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item2 |850  |
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item3 |1500 |
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item4 |150  |
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item5 |1000 |
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item6 |null |
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item7 |2900 |
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item8 |2500 |
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item9 |1200 |
|1  |India  |Product-C        |(Product-C)India_West|India_North[DIV](C)|Uttarakhand[Reg](C)|UT[Terr](C)|9      |1000655  |Queen of Himalaya|Item10|5050 |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item1 |765  |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item2 |900  |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item3 |null |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item4 |null |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item5 |1200 |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item6 |1440 |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item7 |3500 |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item8 |2600 |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item9 |null |
|1  |India  |Product-A        |(Product-A)India_West|India_North[DIV](A)|Himachal[Reg](A)   |HM[Terr](A)|12     |1000416  |Snow Peaks       |Item10|6000 |
+---+-------+-----------------+---------------------+-------------------+-------------------+-----------+-------+---------+-----------------+------+-----+

The performance of this code looks good on 100 MB of data on my local machine. I hope this helps with other use cases as well.
