Writing a Scala/Spark UDF: Options to Consider
A couple of weeks ago, at my workplace, I wrote a metadata-driven data validation framework for Spark. After the initial euphoria of having created the framework in Scala/Spark and Python/Spark, I started reviewing it. During the review, I noticed that the User Defined Functions (UDFs) I had written were prone to throw an error in certain situations.
I then explored various options to make the UDFs fail-safe. Let us start by considering the data below:
name,date,super-name,alien-name,sex,media-type,franchise,planet,alien,alien-planet,side-kick
peter parker,22/03/1970,spiderman,,m,comic,marvel,earth,n,none,none
clark kent,14/09/1985,superman,kal el,m,comic,dc,earth,y,krypton,
bruce wayne,12/12/2000,batman,,m,comic,dc,earth,n,,Robin
Natasha Romanoff,06/04/1982,black widow,,f,movie,marvel,earth,n,none,
Carol Susan Jane Danvers,1982-04-01,Captain Marvel,,f,comic,marvel,earth,n,none,
Let us read the data into a DataFrame, as below:
import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions.{col, lit, udf, when}
import spark.implicits._
val df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("super-heroes.csv")
df.show
For this data set, let us assume that we want to check whether the alien name of the superhero is "kal el". Let us also assume that we are going to implement this check using a UDF.
Option A
The most obvious method of doing so is shown below:
def isAlienName(data: String): String = {
  if (data.equalsIgnoreCase("kal el")) {
    "yes"
  } else {
    "no"
  }
}
val isAlienNameUDF = udf(isAlienName _)
val df1 = df.withColumn("df1", isAlienNameUDF(col("alien-name")))
df1.show
When we apply isAlienNameUDF, it works for all rows where the column value is not null. If the value passed to the UDF is null, it throws an exception: org.apache.spark.SparkException: Failed to execute user defined function. This is because we are calling equalsIgnoreCase on a null value.
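The root cause is easy to reproduce outside of Spark by calling the plain Scala function directly; a minimal illustration:

// data is null here, so data.equalsIgnoreCase("kal el") dereferences null
isAlienName(null) // throws java.lang.NullPointerException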
Option B
To overcome the problem of Option A, we can modify the UDF as follows:
def isAlienName2(data: String): String = {
  if ("kal el".equalsIgnoreCase(data)) {
    "yes"
  } else {
    "no"
  }
}
val isAlienNameUDF2 = udf(isAlienName2 _)
val df2 = df.withColumn("df2", isAlienNameUDF2(col("alien-name")))
df2.show
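This version is null-safe because the receiver of equalsIgnoreCase is now the string literal, which is never null; a quick check:

// "kal el".equalsIgnoreCase(null) simply returns false, so a null input
// compares unequal rather than throwing.
isAlienName2(null) // returns "no"

With this change, df2 shows "no" for every row whose alien-name is null.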
Option C
Instead of checking for null in the UDF or writing the UDF code to avoid a NullPointerException, Spark allows us to perform the null check right at the place where the UDF is invoked, as below:

val df4 = df.withColumn("df4", isAlienNameUDF2(when(col("alien-name").isNotNull, col("alien-name")).otherwise(lit("xyz"))))
df4.show
In this case, we check the column value at the call site: if it is not null, we pass it to the UDF as-is; otherwise, we pass a default value ("xyz") to the UDF.
Option D
In Option C, we invoke the UDF irrespective of the value of the column. We can avoid this by changing the order of 'when' and 'otherwise', as follows:

val df5 = df.withColumn("df5", when(col("alien-name").isNotNull, isAlienNameUDF2(col("alien-name"))).otherwise(lit("xyz")))
df5.show
In this option, the UDF is invoked only if the column value is not null; if it is null, the default value itself appears in the result column.
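The behavioral difference between Options C and D is easiest to see on the rows where alien-name is null; a quick check against the DataFrames built above:

// In df4 (Option C), null rows still reach the UDF as "xyz" and come back as "no".
// In df5 (Option D), null rows never reach the UDF and show "xyz" itself.
df4.filter(col("alien-name").isNull).select("name", "df4").show()
df5.filter(col("alien-name").isNull).select("name", "df5").show()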
Summary
At this point in time, I believe that Option D should be the preferred approach when writing a UDF.
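For completeness, the null handling can also live inside the UDF itself by wrapping the input in an Option; a minimal sketch (the name isAlienNameSafe is my own, not one of the options above):

// Option(data) turns a null input into None, so the comparison is safe
// without any when/otherwise scaffolding at the call site.
def isAlienNameSafe(data: String): String =
  if (Option(data).exists(_.equalsIgnoreCase("kal el"))) "yes" else "no"

val isAlienNameSafeUDF = udf(isAlienNameSafe _)
df.withColumn("df6", isAlienNameSafeUDF(col("alien-name"))).show

Like Option B, this keeps the call site simple but still invokes the UDF for every row, which is why Option D remains my preference.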