
ArangoDB-Spark Connector

Check out this Spark Connector written in Scala that supports loading data between ArangoDB and Spark — complete with code snippets.



We are currently diving deeper into the Apache Spark world. We started with an implementation of a Spark connector written in Scala. The connector supports loading data from ArangoDB into Spark and vice versa. Today, we released a prototype with the aim of including our community in the development process early. Your feedback is more than welcome!



Set Up the SparkContext

First, you need to initialize a SparkContext with the configuration for the Spark connector and the underlying Java driver (see the corresponding blog post here) to connect to your ArangoDB server.

Scala

val conf = new SparkConf()
    .set("arangodb.host", "127.0.0.1")
    .set("arangodb.port", "8529")
    .set("arangodb.user", "myUser")
    .set("arangodb.password", "myPassword")
    // ...
val sc = new SparkContext(conf)


Java

SparkConf conf = new SparkConf()
    .set("arangodb.host", "127.0.0.1")
    .set("arangodb.port", "8529")
    .set("arangodb.user", "myUser")
    .set("arangodb.password", "myPassword");
    // ...
JavaSparkContext sc = new JavaSparkContext(conf);


Load Data From ArangoDB

To load data from ArangoDB, use the function load from the object ArangoSpark with the SparkContext, the name of your collection, and the type of the bean to load the data into. If needed, there is an additional load function with extra read options, such as the name of the database.
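The examples below deserialize documents into instances of a bean class. MyBean is not defined in this post; a minimal sketch of such a bean, assuming your documents carry a name and an age attribute, could look like this:

```scala
// Hypothetical bean for documents like {"name": "Alice", "age": 30};
// the actual fields depend on the documents in your collection.
case class MyBean(name: String, age: Int)
```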

Scala

val rdd = ArangoSpark.load[MyBean](sc, "myCollection")


Java

ArangoJavaRDD<MyBean> rdd = ArangoSpark.load(sc, "myCollection", MyBean.class);

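The overload with read options is not shown above; a sketch of how it might be called, assuming the connector exposes a ReadOptions type whose database name can be set (both are assumptions here, not confirmed by the post), could look like:

```scala
// Sketch only: ReadOptions and its database parameter are assumptions
// based on the read options mentioned in the text. Requires a running
// ArangoDB server and an initialized SparkContext sc.
val rdd = ArangoSpark.load[MyBean](sc, "myCollection", ReadOptions(database = "myDB"))
```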

Save Data to ArangoDB

To save data to ArangoDB, use the function save from the object ArangoSpark with the RDD and the name of your collection. If needed, there is an additional save function with extra write options, such as the name of the database.

Scala/Java

ArangoSpark.save(rdd, "myCollection")

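As with loading, the overload with write options is not shown; a sketch, assuming a WriteOptions type with a database parameter analogous to the read side (again an assumption, not confirmed by the post), might look like:

```scala
// Sketch only: WriteOptions and its database parameter are assumptions
// based on the write options mentioned in the text. Requires a running
// ArangoDB server and an existing RDD rdd.
ArangoSpark.save(rdd, "myCollection", WriteOptions(database = "myDB"))
```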

It would be great if you tried it out and gave us your feedback.



Published at DZone with permission of Mark Vollmary, DZone MVB. See the original article here.

