ArangoDB-Spark Connector

Check out this Spark Connector written in Scala that supports loading data between ArangoDB and Spark — complete with code snippets.

Currently, we are diving deeper into the Apache Spark world. We started with an implementation of a Spark connector written in Scala. The connector supports loading data from ArangoDB into Spark and vice versa. Today, we released a prototype, with the aim of including our community in the development process early. Your feedback is more than welcome!

Set Up the SparkContext

First, you need to initialize a SparkContext with the configuration for the Spark connector and the underlying Java driver (see the corresponding blog post) so it can connect to your ArangoDB server.

Scala

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
    .set("arangodb.host", "127.0.0.1")
    .set("arangodb.port", "8529")
    .set("arangodb.user", "myUser")
    .set("arangodb.password", "myPassword")
    // ... further connector options as needed
val sc = new SparkContext(conf)


Java

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

SparkConf conf = new SparkConf()
    .set("arangodb.host", "127.0.0.1")
    .set("arangodb.port", "8529")
    .set("arangodb.user", "myUser")
    .set("arangodb.password", "myPassword");
// ... further connector options as needed
JavaSparkContext sc = new JavaSparkContext(conf);
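
To compile these snippets, the connector has to be on your classpath. As a sketch, assuming the prototype is published under the com.arangodb group (the artifact name and version here are assumptions; check the project's README for the actual coordinates), an sbt dependency could look like this:

Scala

// build.sbt (sketch; group, artifact name, and version are assumptions)
libraryDependencies += "com.arangodb" %% "arangodb-spark-connector" % "1.0.0"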


Load Data From ArangoDB

To load data from ArangoDB, use the function load on the object ArangoSpark, passing the SparkContext, the name of your collection, and the type of the bean into which the data should be loaded. If needed, there is an additional load function with extra read options, such as the name of the database.

Scala

val rdd = ArangoSpark.load[MyBean](sc, "myCollection")


Java

ArangoJavaRDD<MyBean> rdd = ArangoSpark.load(sc, "myCollection", MyBean.class);
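
For the load variant with extra read options mentioned above, a minimal sketch might look like the following. The ReadOptions type and its database parameter are assumptions about the prototype's API; check the connector's sources for the exact names:

Scala

// Sketch: load from a collection in a specific database rather than the default.
// ReadOptions and its database parameter are assumed, not confirmed API.
val rddInDb = ArangoSpark.load[MyBean](sc, "myCollection", ReadOptions(database = "myDB"))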


Save Data to ArangoDB

To save data to ArangoDB, use the function save on the object ArangoSpark, passing the RDD and the name of your collection. If needed, there is an additional save function with extra write options, such as the name of the database.

Scala/Java

ArangoSpark.save(rdd, "myCollection")
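
Analogously, a save with explicit write options might look like this sketch, again assuming an options type named WriteOptions with a database parameter (an assumption about the prototype's API):

Scala

// Sketch: save into a collection of a specific database.
// WriteOptions and its database parameter are assumed, not confirmed API.
ArangoSpark.save(rdd, "myCollection", WriteOptions(database = "myDB"))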


It would be great if you tried it out (for instance, with a round trip like the sketch below) and gave us your feedback.
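
To put the pieces together, here is a self-contained round-trip sketch. The package name com.arangodb.spark, the bean class and its fields, and the local-mode settings are assumptions for illustration; since it is unclear whether the prototype maps arbitrary Scala classes, the bean is written Java-bean style with a no-argument constructor:

Scala

import org.apache.spark.{SparkConf, SparkContext}
import com.arangodb.spark.ArangoSpark // package name assumed

// Hypothetical bean; attribute names are assumed to match the documents
// in "myCollection". The no-arg constructor keeps it Java-bean compatible,
// and Serializable lets Spark ship instances between workers.
class MyBean(var name: String, var age: Int) extends Serializable {
  def this() = this(null, 0)
  override def toString = s"MyBean($name, $age)"
}

object RoundTrip {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("arangodb-spark-roundtrip")
      .setMaster("local[*]")
      .set("arangodb.host", "127.0.0.1")
      .set("arangodb.port", "8529")
      .set("arangodb.user", "myUser")
      .set("arangodb.password", "myPassword")
    val sc = new SparkContext(conf)

    // Save a few documents, then load them back and print them.
    val beans = sc.parallelize(Seq(new MyBean("Alice", 30), new MyBean("Bob", 25)))
    ArangoSpark.save(beans, "myCollection")

    val loaded = ArangoSpark.load[MyBean](sc, "myCollection")
    loaded.collect().foreach(println)

    sc.stop()
  }
}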



Published at DZone with permission of Mark Vollmary, DZone MVB. See the original article here.
