
Apache Spark With Apache Hive


Today we'll learn how to connect Apache Spark Scala code to the Apache Hive datastore on Hadoop and run data warehouse queries from Spark.


Hello geeks! We have already discussed how to start programming with Spark in Scala. In this blog, we will discuss how to use Hive with Spark 2.0.

When you start to work with Hive, you need a HiveContext (which inherits from SQLContext), and Spark needs core-site.xml, hdfs-site.xml, and hive-site.xml. If you don't configure hive-site.xml, the context automatically creates a metastore_db in the current directory and a warehouse directory indicated by HiveConf (which defaults to /user/hive/warehouse).

hive-site.xml

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost/metastore_db</value>
        <description>metadata is stored in a MySQL server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>MySQL JDBC driver class</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hiveuser</value>
        <description>user name for connecting to MySQL server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hivepassword</value>
        <description>password for connecting to MySQL server</description>
    </property>
</configuration>

Now, if we talk about Spark 2.0, HiveContext and SQLContext have been deprecated, although Spark does provide backward compatibility. Spark 2.0 also introduces a new common entry point: SparkSession.

We can create a SparkSession object as follows:

import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder
  .master("local")
  .appName("demo")
  .getOrCreate()

Now we can get the sqlContext and sparkContext objects (and others) from the SparkSession object.
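
For example (a minimal sketch using the sparkSession created above):

// The classic SparkContext, for RDD operations
val sparkContext = sparkSession.sparkContext

// The backward-compatible SQLContext
val sqlContext = sparkSession.sqlContext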

If anybody wants to work with Hive, then we need to enable Hive support when building the session:

val sparkSession = SparkSession.builder
  .master("local")
  .appName("demo")
  .enableHiveSupport()
  .getOrCreate()

And now we can execute our query:

sparkSession.sqlContext.sql("INSERT INTO TABLE students VALUES ('Rahul','Kumar'), ('abc','xyz')")
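
Note that the INSERT assumes a students table already exists in Hive. As a minimal sketch (the table name and columns here are just for illustration), we could create the table first and then read the rows back:

// Create the table if it is not already there (illustrative schema)
sparkSession.sql("CREATE TABLE IF NOT EXISTS students (firstName STRING, lastName STRING)")

// Read the rows back as a DataFrame and print them
sparkSession.sql("SELECT * FROM students").show()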

The complete demo code is available on GitHub.

Thanks!

