Writing a Scala/Spark UDF: Options to Consider

By Bipin Patwardhan · May 12, 2020 · Tutorial


A couple of weeks ago at my workplace, I wrote a metadata-driven data validation framework for Spark. After the initial euphoria of having created the framework in Scala/Spark and Python/Spark, I started reviewing it. During the review, I noticed that the User Defined Functions (UDFs) I had written were prone to throwing errors in certain situations.

I then explored various options to make the UDFs fail-safe. Let us start by considering the data below:

Plain Text

name,date,super-name,alien-name,sex,media-type,franchise,planet,alien,alien-planet,side-kick
peter parker,22/03/1970,spiderman,,m,comic,marvel,earth,n,none,none
clark kent,14/09/1985,superman,kal el,m,comic,dc,earth,y,krypton,
bruce wayne,12/12/2000,batman,,m,comic,dc,earth,n,,Robin
Natasha Romanoff,06/04/1982,black widow,,f,movie,marvel,earth,n,none,
Carol Susan Jane Danvers,1982-04-01,Captain Marvel,,f,comic,marvel,earth,n,none,


Let us read the data into a DataFrame, as below:

Scala

import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions.{col, lit, udf, when}

import spark.implicits._

val df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("super-heroes.csv")
df.show


For this data set, let us assume that we want to check if the name of the superhero is "kal el". Let us also assume that we are going to implement this check using a UDF.

Option A

The most obvious method of doing so is shown below:

Scala

def isAlienName(data: String): String = {
  if (data.equalsIgnoreCase("kal el")) {
    "yes"
  } else {
    "no"
  }
}

val isAlienNameUDF = udf(isAlienName _)

val df1 = df.withColumn("df1", isAlienNameUDF(col("alien-name")))
df1.show


When we apply the isAlienNameUDF method, it works for every row where the column value is not null. If the value passed to the UDF is null, it throws an exception: org.apache.spark.SparkException: Failed to execute user defined function.

This is because we are invoking equalsIgnoreCase on a null value.
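The failure can be reproduced outside Spark with a plain Scala snippet (a minimal sketch, independent of the dataset above):

```scala
// Minimal reproduction of the failure: calling a method on a null
// String reference throws a NullPointerException, which Spark wraps
// in a SparkException when it happens inside a UDF.
val data: String = null
val outcome: String =
  try {
    if (data.equalsIgnoreCase("kal el")) "yes" else "no"
  } catch {
    case _: NullPointerException => "NPE"
  }
// outcome is "NPE"
```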

Option B

To overcome the problem of Option A, we can reverse the comparison so that equalsIgnoreCase is invoked on the literal "kal el" rather than on the possibly-null column value:

Scala

def isAlienName2(data: String): String = {
  if ("kal el".equalsIgnoreCase(data)) {
    "yes"
  } else {
    "no"
  }
}

val isAlienNameUDF2 = udf(isAlienName2 _)

val df2 = df.withColumn("df2", isAlienNameUDF2(col("alien-name")))
df2.show
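This works because java.lang.String.equalsIgnoreCase returns false when its argument is null, so flipping the receiver and the argument makes the comparison null-safe. A quick sketch:

```scala
// With the literal as the receiver, a null argument no longer throws:
// String.equalsIgnoreCase simply returns false for a null argument.
val isKalEl = "kal el".equalsIgnoreCase(null)
// isKalEl is false, and no exception is thrown
```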


Option C

Instead of checking for null inside the UDF, or writing the UDF body to avoid a NullPointerException, Spark lets us perform the null check at the point where the UDF is invoked, as below:

val df4 = df.withColumn("df4",
  isAlienNameUDF2(when(col("alien-name").isNotNull, col("alien-name")).otherwise(lit("xyz"))))
df4.show

In this case, we check the value of the column. If the value is not null, we pass the value of the column. Otherwise, we pass a default value to the UDF.

Option D

In Option C, the UDF is invoked irrespective of the value of the column. We can avoid this by moving the UDF call inside the 'when' clause, so the UDF runs only for non-null values:

val df5 = df.withColumn("df5",
  when(col("alien-name").isNotNull, isAlienNameUDF2(col("alien-name"))).otherwise(lit("xyz")))
df5.show

In this option, the UDF is invoked only if the column value is not null. If the column value is null, we use a default value.

Summary

At this point, I believe Option D should be the preferred approach when writing a UDF, as it avoids invoking the UDF on null values altogether.
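A related null-safe variant, not covered in the options above but in the same spirit, is to absorb the null inside the UDF body with Scala's Option (a sketch using the same check):

```scala
// Sketch: wrap the incoming value in Option so a null becomes None,
// making the comparison null-safe without touching the column expression.
def isAlienName3(data: String): String =
  if (Option(data).exists(_.equalsIgnoreCase("kal el"))) "yes" else "no"

// isAlienName3(null) returns "no"; isAlienName3("Kal El") returns "yes"
```

The trade-off versus Option D is that the UDF is still invoked for every row; it simply never sees a bare null.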


Opinions expressed by DZone contributors are their own.
