Creating a Simple Hive UDF in Scala

If you want to make a UDF for your Hive setup, you usually need to use Java. Instead, you can use Scala together with the sbt assembly plugin.

Sometimes, the query you want to write can’t be expressed easily (or at all) using the built-in functions that Hive provides. By allowing you to write a user-defined function (UDF), Hive makes it easy to plug in your own processing code and invoke it from a Hive query. UDFs normally have to be written in Java, the language that Hive itself is written in, but in this blog we will write one in Scala.

A UDF must satisfy the following two properties:

  • A UDF must be a subclass of org.apache.hadoop.hive.ql.exec.UDF.

  • A UDF must implement at least one evaluate() method. The evaluate() method is not defined by an interface, since it may take an arbitrary number of arguments, of arbitrary types, and it may return a value of arbitrary type.

Hive introspects the UDF to find the evaluate() method that matches the Hive function that was invoked.
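For instance, a single UDF class can expose several evaluate() overloads, and Hive will choose the one whose signature matches the call. Here is a minimal sketch (the class and its behaviour are hypothetical, purely to illustrate overloading):

import org.apache.hadoop.hive.ql.exec.UDF

// Hypothetical UDF with two evaluate() overloads; Hive resolves the one
// matching the argument types used in the query.
class Repeat_Udf extends UDF {

  // repeat(str): return the string doubled
  def evaluate(str: String): String = str + str

  // repeat(str, n): return the string repeated n times
  def evaluate(str: String, n: Int): String = str * n

}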

Let's get started! The Scala version that I am using is 2.11. Now add the following settings to your build.sbt file:

name := "hiveudf_example"

version := "1.0"

scalaVersion := "2.11.1"

unmanagedJars in Compile += file("/usr/lib/hive/lib/hive-exec-2.0.0.jar")


The path passed to file(...) should point to the hive-exec JAR under your Hive installation. I am hardcoding it here, but replace it with the location on your own machine.
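Alternatively, hive-exec is also published to Maven Central, so instead of pointing at a local JAR you could let sbt resolve it as a provided dependency (a sketch, assuming Hive 2.0.0; match the version to your cluster):

libraryDependencies += "org.apache.hive" % "hive-exec" % "2.0.0" % "provided"

Marking it provided keeps the Hive classes out of the assembly JAR, since Hive already supplies them at runtime. With the build in place, create your main file as follows: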

package com.knoldus.udf

import org.apache.hadoop.hive.ql.exec.UDF

class Scala_Hive_Udf extends UDF {

  // Trim leading and trailing whitespace from the input string.
  def evaluate(str: String): String = {
    str.trim
  }

}


Here I am creating a UDF that mimics Hive's trim function; you can implement whatever logic you want, though. The next task is to build an assembly (fat JAR) for the project. Add the sbt-assembly plugin to your project/plugins.sbt file:

logLevel := Level.Warn


addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")


The next step is to create the JAR. Go to your sbt console and run the assembly task provided by the sbt-assembly plugin.
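From the sbt console, that is simply:

assembly

(Equivalently, run sbt assembly from the project root.) This builds a single fat JAR containing your UDF classes and their dependencies.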

Once the build finishes, you will find the JAR inside the target folder (under target/scala-2.11 here). Now submit this JAR to Hive. First, start the Hive CLI and add the JAR to the classpath using the ADD JAR command, followed by the path of your JAR:

Logging initialized using configuration in jar:file:/home/knoldus/Documents/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive> ADD JAR /home/knoldus/Desktop/opensource/hiveudf_example/target/scala-2.11/hiveudf_example-assembly-1.0.jar;
Added [/home/knoldus/Desktop/opensource/hiveudf_example/target/scala-2.11/hiveudf_example-assembly-1.0.jar] to class path


Create a function with this UDF:

hive> CREATE FUNCTION trim AS 'com.knoldus.udf.Scala_Hive_Udf';
OK
Time taken: 0.47 seconds
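As an aside, CREATE FUNCTION registers the function permanently in the metastore for the current database; if you only need it for the current session (or want to avoid shadowing the built-in trim), you could register it as a temporary function under a different name instead (the name trim_scala here is arbitrary):

hive> CREATE TEMPORARY FUNCTION trim_scala AS 'com.knoldus.udf.Scala_Hive_Udf';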


Now, we will call this function as shown below:

hive> select trim(" hello ");
OK
hello
Time taken: 1.304 seconds, Fetched: 1 row(s)
hive>
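The same function can of course be applied to table columns; for example, against a hypothetical table employees with a string column name:

hive> SELECT trim(name) FROM employees;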


This is the simplest way to create a UDF in Hive. I hope this blog helps! Happy coding!

Topics:
scala, udf, java, apache hive, tutorial

Published at DZone with permission of Anubhav Tarar, DZone MVB. See the original article here.
