
Connect Apache Spark SQL to Node.js on Linux via JDBC Driver

This tutorial explains how to access Apache Spark SQL data from a Node.js application using DataDirect Apache Spark SQL JDBC driver on a Linux machine/server.

By Saikrishna Teja Bobba · Aug. 04, 16 · Big Data Zone · Tutorial

Apache Spark is changing the way Big Data is accessed and processed. While MapReduce was a good implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster, it was not optimized for interactive data analysis that involves iterative algorithms. Spark was designed to overcome this shortcoming.

As you implement Apache Spark in your organization, you will need ways to connect it to other JDBC applications. Apache Spark SQL lets you expose your data to any JDBC client. This tutorial explains how to connect a Node.js application on Linux using a Spark SQL JDBC driver.

If you are looking to connect to a Node.js ODBC application using a Spark SQL ODBC driver, visit this tutorial.

Before You Start

  1. Make sure that Java is installed on your machine. You can check by running java -version in your terminal.
  2. If Java is installed, its version will be displayed. If not, install Java before proceeding to the next steps.
  3. Make sure Apache Spark SQL is installed on your machine. You can download Apache Spark pre-built for Hadoop here.

Install DataDirect Spark SQL JDBC Driver

  1. Download the DataDirect Spark SQL JDBC driver from here.
  2. Extract the contents of the downloaded package by opening a terminal at the download location and running: unzip PROGRESS_DATADIRECT_JDBC_SPARKSQL_x.x.x.zip
  3. To install the driver, execute the .jar package by running the following command in the terminal: java -jar PROGRESS_DATADIRECT_JDBC_INSTALL.jar
  4. This launches an interactive Java installer, which you can use to install the Spark SQL JDBC driver to your desired location as either a licensed or evaluation installation.

Load Data Into Spark SQL

  1. For the purposes of this tutorial, I will be loading the data from a CSV file, which can be found here.
  2. Start the Spark shell with the following command. It configures the Thrift server to run in single-session mode, since I am only going to register the imported data as a temporary table, and it includes the spark-csv package, as reading CSV is not supported natively: spark-shell --conf spark.sql.hive.thriftServer.singleSession=true --packages com.databricks:spark-csv_2.11:1.4.0
  3. Once the Spark shell starts successfully, run the following commands to import the data from the CSV and register it as a temporary table:
import org.apache.spark.sql._
import org.apache.spark.sql.hive._
import org.apache.spark.sql.hive.thriftserver._
//Read from CSV
val df = sqlContext.read.format("com.databricks.spark.csv").option("inferSchema","true").option("header","true").load("/path/to/InsuranceData.csv")
//Check if CSV was imported successfully
df.printSchema()
df.count()
//Register Temp Table
df.registerTempTable("InsuranceData")
sqlContext.sql("select count(*) from InsuranceData").show()
//Start the Thrift server with this shell's HiveContext so the temp table is visible over JDBC
val hc = sqlContext.asInstanceOf[HiveContext]
HiveThriftServer2.startWithContext(hc)


Connect to Your Data From Node.js

  1. In your Node.js application, install the node-jdbc module (and the async module used below) using npm. Read this page for instructions on installing the node-jdbc module.
  2. Copy the SparkSQL JDBC driver from /install_dir/Progress/JDBC_XX/lib to your project library.
  3. You can now access the data from Spark SQL through the DataDirect Spark SQL JDBC driver by loading the JDBC module in your code. The following snippet demonstrates how:
var JDBC = require('jdbc');
var jinst = require('jdbc/lib/jinst');
var asyncjs = require('async');
if (!jinst.isJvmCreated()) {
    jinst.addOption("-Xrs");
    jinst.setupClasspath(['./path/to/sparksql.jar']);
}
var config = {
    // SparkSQL configuration to your server
    url: 'jdbc:datadirect:sparksql://<hostname>:<port>;DatabaseName=default',
    drivername: 'com.ddtek.jdbc.sparksql.SparkSQLDriver',
    minpoolsize: 1,
    maxpoolsize: 100,
    user: 'username',
    password: 'password',
    properties: {}
};
var sparksqldb = new JDBC(config);
//initialize
sparksqldb.initialize(function(err) {
    if (err) {
        console.log(err);
    }
});

sparksqldb.reserve(function(err, connObj) {
    if (connObj) {
        console.log("Using connection: " + connObj.uuid);
        var conn = connObj.conn;

        // Query the database.
        asyncjs.series([
        function(callback) {
            // Select statement example.
            conn.createStatement(function(err, statement) {
                if (err) {
                    callback(err);
                } else {
                    statement.setFetchSize(100, function(err) {
                        if (err) {
                            callback(err);
                        } else {
                            //Execute a query
                            statement.executeQuery("SELECT * FROM InsuranceData;",
                            function(err, resultset) {
                                if (err) {
                                    callback(err);
                                } else {
                                    resultset.toObjArray(function(err, results) {
                                        //Printing number of records
                                        if (results.length > 0) {
                                            console.log("Record count: " + results.length);
                                        }
                                        callback(null, resultset);
                                    });
                                }
                            });
                        }
                    });
                }
            });
        },
        ], function(err, results) {
            // Results can also be processed here.
            // Release the connection back to the pool.
            sparksqldb.release(connObj, function(err) {
                if (err) {
                    console.log(err.message);
                }
            });
        });
    }
});


4. Note the setupClasspath method, where you must provide the path to the DataDirect Spark SQL JDBC driver. When you run the above code, it should print the number of records in your temporary table to the console.
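The rows that resultset.toObjArray hands to its callback are plain JavaScript objects, so they can be processed with ordinary array operations. Here is a minimal sketch of that post-processing step on its own; the sample rows and column names below are invented, since the actual schema comes from InsuranceData.csv:

```javascript
// Hypothetical sample rows, standing in for what resultset.toObjArray()
// would return for the InsuranceData temp table (column names are made up).
var rows = [
    { PolicyID: 1, Premium: 120.5, State: 'NC' },
    { PolicyID: 2, Premium: 98.0,  State: 'NC' },
    { PolicyID: 3, Premium: 210.0, State: 'VA' }
];

// Count the records, as the tutorial's callback does.
var recordCount = rows.length;
console.log("Record count: " + recordCount); // prints: Record count: 3

// A simple client-side aggregation: total premium per state.
var premiumByState = rows.reduce(function (acc, row) {
    acc[row.State] = (acc[row.State] || 0) + row.Premium;
    return acc;
}, {});
console.log(premiumByState);
```

Keeping logic like this in its own function makes it easy to test without a live Thrift server connection.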


Published at DZone with permission of Saikrishna Teja Bobba, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
