
Apache Spark With Apache Hive

Today we'll learn about connecting Apache Spark Scala code to the Apache Hive datastore on Hadoop and running data warehouse queries from Spark.


Hello geeks, we have previously discussed how to start programming with Spark in Scala. In this blog, we will discuss how to use Hive with Spark 2.0.

When you start to work with Hive, you need a HiveContext (which inherits from SQLContext), and Spark needs core-site.xml, hdfs-site.xml, and hive-site.xml on its classpath. If you don't configure hive-site.xml, the context automatically creates a metastore_db in the current directory and creates a warehouse directory at the location indicated by HiveConf (which defaults to /user/hive/warehouse).
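For reference, on Spark 1.x the Hive-aware entry point is created like this (a minimal sketch, assuming an existing SparkContext named sc, as in the spark-shell):

import org.apache.spark.sql.hive.HiveContext

// Spark 1.x style: wrap the existing SparkContext in a HiveContext
// to get Hive query support on top of the SQLContext API.
val hiveContext = new HiveContext(sc)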

hive-site.xml

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost/metastore_db</value>
        <description>metadata is stored in a MySQL server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>MySQL JDBC driver class</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hiveuser</value>
        <description>user name for connecting to the MySQL server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hivepassword</value>
        <description>password for connecting to the MySQL server</description>
    </property>
</configuration>

Now, if we talk about Spark 2.0, HiveContext and SQLContext have been deprecated, though Spark does provide backward compatibility. Spark 2.0 also introduces a new common entry point: SparkSession.

We can create a SparkSession object as follows:

import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder
  .master("local")
  .appName("demo")
  .getOrCreate()

Now we can get the sqlContext, the sparkContext, and other objects from the SparkSession object.
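For example (a minimal sketch, using the sparkSession object created above):

// The old entry points are still reachable from the session,
// which is what gives Spark 2.0 its backward compatibility.
val sparkContext = sparkSession.sparkContext
val sqlContext   = sparkSession.sqlContext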

If anybody wants to work with Hive (the old HiveContext functionality), we need to enable Hive support on the session:

val sparkSession = SparkSession.builder
  .master("local")
  .appName("demo")
  .enableHiveSupport()
  .getOrCreate()
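Since Spark 2.0, the warehouse location can also be set directly on the builder; the spark.sql.warehouse.dir property supersedes hive.metastore.warehouse.dir from hive-site.xml. A minimal sketch (the path shown is an assumption, matching the default mentioned above):

// Optional: override the warehouse directory on the session itself.
// spark.sql.warehouse.dir takes precedence over hive.metastore.warehouse.dir.
val sparkSessionWithWarehouse = SparkSession.builder
  .master("local")
  .appName("demo")
  .config("spark.sql.warehouse.dir", "/user/hive/warehouse") // assumed path
  .enableHiveSupport()
  .getOrCreate()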

And now we can execute our query:

sparkSession.sqlContext.sql("INSERT INTO TABLE students VALUES ('Rahul','Kumar'), ('abc','xyz')")
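Note that the INSERT above assumes the students table already exists. A fuller sketch, with a hypothetical two-column table definition, might look like this:

// Hypothetical end-to-end example: create the table, insert rows, read them back.
sparkSession.sql("CREATE TABLE IF NOT EXISTS students (first_name STRING, last_name STRING)")
sparkSession.sql("INSERT INTO TABLE students VALUES ('Rahul','Kumar'), ('abc','xyz')")
sparkSession.sql("SELECT * FROM students").show()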

Complete Demo Code on GitHub

Thanks!


Topics:
hive, spark, scala
