
Creating a Simple Hive UDF in Scala


If you want to make a UDF for your Hive setup, you usually need to use Java. But instead, you can use Scala and an assembly plugin.



Sometimes, the query you want to write can't be expressed easily (or at all) using the built-in functions that Hive provides. For these cases, Hive lets you write a user-defined function (UDF), making it easy to plug your own processing code into a Hive query. UDFs are normally written in Java, the language that Hive itself is written in, but in this blog, we will write one in Scala.

A UDF must satisfy the following two properties:

  • A UDF must be a subclass of org.apache.hadoop.hive.ql.exec.UDF.

  • A UDF must implement at least one evaluate() method. The evaluate() method is not defined by an interface, since it may take an arbitrary number of arguments, of arbitrary types, and it may return a value of arbitrary type.

Hive introspects the UDF to find the evaluate() method that matches the Hive function that was invoked.
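To illustrate the idea, here is a simplified, self-contained sketch of that reflective lookup, with no Hive dependency. The class and object names are my own, not part of Hive; Hive's real method resolver is more elaborate (it also handles type coercion), but the basic mechanism is matching an evaluate() overload against the argument types of the call:

```scala
// Simplified sketch of how an evaluate() method can be located by
// reflection, mirroring the idea behind Hive's UDF method resolution.
class ExampleUdf {
  // Overloaded evaluate methods with different signatures.
  def evaluate(s: String): String = s.trim
  def evaluate(a: Int, b: Int): Int = a + b
}

object ReflectionDemo {
  def main(args: Array[String]): Unit = {
    val udf = new ExampleUdf
    // Look up the evaluate overload taking a single String argument,
    // roughly the way Hive matches a call like trim(" hello ").
    val method = classOf[ExampleUdf].getMethod("evaluate", classOf[String])
    val result = method.invoke(udf, " hello ")
    println(result) // prints "hello"
  }
}
```

If no overload matches the argument types of the query, the reflective lookup fails, which is why Hive reports an error when you call a UDF with the wrong argument types.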

Let's get started! I am using Scala 2.11. Add the following properties to your build.sbt file:

name := "hiveudf_example"

version := "1.0"

scalaVersion := "2.11.1"

unmanagedJars in Compile += file("/usr/lib/hive/lib/hive-exec-2.0.0.jar")
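If you would rather not hardcode a local path, an alternative (assuming your build can reach Maven Central) is to declare hive-exec as a managed dependency instead of an unmanaged JAR. The coordinates below are the standard ones for the artifact, but match the version to your cluster:

```scala
// Alternative to unmanagedJars: resolve hive-exec from Maven Central.
// "provided" keeps it out of the assembly JAR, since the Hive runtime
// already has these classes on its classpath.
libraryDependencies += "org.apache.hive" % "hive-exec" % "2.0.0" % "provided"
```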

The path here points to the hive-exec JAR under your Hive installation's lib directory. I have hardcoded it, but you should adjust it to match your own setup. Create your main file as follows:

package com.knoldus.udf

import org.apache.hadoop.hive.ql.exec.UDF

class Scala_Hive_Udf extends UDF {

  def evaluate(str: String): String = {
    str.trim
  }
}

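One thing to keep in mind: Hive passes SQL NULL column values to evaluate() as Java nulls, so a production UDF should guard against them. As a sketch, here is the same trim logic mirrored in a plain Scala object (the object names are my own, and there is no Hive on the classpath) so the logic can be unit-tested locally before deploying:

```scala
// Plain-Scala mirror of the UDF's trim logic, testable without Hive.
// Hive passes SQL NULLs to evaluate() as Java nulls, so guard for them.
object TrimLogic {
  def evaluate(str: String): String =
    if (str == null) null else str.trim
}

object TrimLogicDemo {
  def main(args: Array[String]): Unit = {
    println(TrimLogic.evaluate(" hello ")) // prints "hello"
    println(TrimLogic.evaluate(null))      // prints "null"
  }
}
```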
I am creating a UDF for Hive's trim method, but you can implement any method you want. The next task is to create an assembly for your project. Add the sbt-assembly plugin to your project/plugins.sbt file:

logLevel := Level.Warn

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")

The next step is to create a JAR. Go to your sbt console and run the assembly command (sbt assembly).

Once the build finishes, you will find your JAR inside the target folder. Now submit this JAR to Hive as a UDF: start the Hive shell and add the JAR using the ADD JAR command, followed by the path to your JAR.

Logging initialized using configuration in jar:file:/home/knoldus/Documents/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> ADD JAR /home/knoldus/Desktop/opensource/hiveudf_example/target/scala-2.11/hiveudf_example-assembly-1.0.jar
> ;
Added [/home/knoldus/Desktop/opensource/hiveudf_example/target/scala-2.11/hiveudf_example-assembly-1.0.jar] to class path

Create a function with this UDF:

hive> CREATE FUNCTION trim AS 'com.knoldus.udf.Scala_Hive_Udf';
Time taken: 0.47 seconds

Now, we can call this function:

hive> select trim(" hello ");
Time taken: 1.304 seconds, Fetched: 1 row(s)

This is the simplest way to create a UDF in Hive. I hope this blog helps! Happy coding!



Published at DZone with permission of

