
Apache Spark With Apache Hive

Today we'll learn how to connect Apache Spark Scala code to Apache Hive, the Hadoop data warehouse, and run warehouse queries from Spark.



Hello, geeks! We have already discussed how to start programming with Spark in Scala. In this blog, we will discuss how to use Hive with Spark 2.0.

When you start to work with Hive, you need HiveContext (which inherits SQLContext), and Spark needs core-site.xml, hdfs-site.xml, and hive-site.xml on its classpath. If you don't configure hive-site.xml, the context automatically creates a metastore_db in the current directory and a warehouse directory at the location indicated by HiveConf (which defaults to /user/hive/warehouse).
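For reference, on Spark 1.x the Hive entry point was built like this (a minimal sketch; the master and app names are placeholders matching the demo below, and the spark-shell already provides the SparkContext as sc):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Build a SparkContext (the spark-shell already provides one as `sc`)
val conf = new SparkConf().setMaster("local").setAppName("demo")
val sc = new SparkContext(conf)

// HiveContext inherits SQLContext and picks up hive-site.xml from the classpath
val hiveContext = new HiveContext(sc)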

hive-site.xml

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost/metastore_db</value>
        <description>Metadata is stored in a MySQL server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>MySQL JDBC driver class</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hiveuser</value>
        <description>Username for connecting to the MySQL server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hivepassword</value>
        <description>Password for connecting to the MySQL server</description>
    </property>
</configuration>

In Spark 2.0, HiveContext and SQLContext have been deprecated, although Spark still provides backward compatibility. In their place, Spark introduced a common entry point: SparkSession.

We can create a SparkSession object as follows:

import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder
  .master("local")
  .appName("demo")
  .getOrCreate()

Now we can get the sqlContext, sparkContext, and other objects from the SparkSession object.
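For example (a small sketch, reusing the sparkSession object built above):

// The older entry points remain reachable from the new SparkSession
val sparkContext = sparkSession.sparkContext
val sqlContext = sparkSession.sqlContext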

If anybody wants to work with Hive, we need to enable Hive support when building the session:

val sparkSession = SparkSession.builder
  .master("local")
  .appName("demo")
  .enableHiveSupport()
  .getOrCreate()

And now we are ready to execute our query:

sparkSession.sqlContext.sql("INSERT INTO TABLE students VALUES ('Rahul','Kumar'), ('abc','xyz')")
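To check that the insert worked, we can read the rows back (a hedged sketch; like the INSERT above, it assumes the students table already exists in Hive with two string columns):

// Query the Hive table and print the inserted rows
val students = sparkSession.sql("SELECT * FROM students")
students.show()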

You can find the complete demo code on GitHub.

Thanks!


Topics:
hive, spark, scala

Published at DZone with permission of Rahul Kumar, DZone MVB. See the original article here.

