
ArangoDB-Spark Connector


Check out this Spark connector, written in Scala, that supports loading data between ArangoDB and Spark, complete with code snippets.


We are currently diving deeper into the Apache Spark world. We started with an implementation of a Spark connector written in Scala. The connector supports loading data from ArangoDB into Spark and vice versa. Today, we released a prototype, with the aim of including our community in the development process early. Your feedback is more than welcome!


Set Up the SparkContext

First, you need to initialize a SparkContext with the configuration for the Spark connector and the underlying Java driver (see the corresponding blog post) to connect to your ArangoDB server.


Scala:

val conf = new SparkConf()
    .set("arangodb.host", "") // your ArangoDB host, e.g. 127.0.0.1
    .set("arangodb.port", "8529")
    .set("arangodb.user", "myUser")
    .set("arangodb.password", "myPassword")
val sc = new SparkContext(conf)

Java:

SparkConf conf = new SparkConf()
    .set("arangodb.host", "") // your ArangoDB host, e.g. 127.0.0.1
    .set("arangodb.port", "8529")
    .set("arangodb.user", "myUser")
    .set("arangodb.password", "myPassword");
JavaSparkContext sc = new JavaSparkContext(conf);

Load Data From ArangoDB

To load data from ArangoDB, use the function load from the object ArangoSpark with the SparkContext, the name of your collection, and the type of the bean the data should be loaded into. If needed, there is an additional load function with extra read options, such as the name of the database.


Scala:

val rdd = ArangoSpark.load[MyBean](sc, "myCollection")

Java:

ArangoJavaRDD<MyBean> rdd = ArangoSpark.load(sc, "myCollection", MyBean.class);
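If your documents live in a database other than the default, the overloaded load function mentioned above accepts a read-options value. A minimal sketch in Scala, assuming the connector's com.arangodb.spark package with a ReadOptions type whose first argument is the database name, and using the placeholder names myDB, myCollection, and MyBean:

```scala
import com.arangodb.spark.{ArangoSpark, ReadOptions}

// A simple bean the documents are mapped into; the field names are
// assumed to match the document attributes in the collection.
case class MyBean(name: String, value: Int)

// Load from collection "myCollection" of database "myDB"
// instead of the default database (names are placeholders).
val rdd = ArangoSpark.load[MyBean](sc, "myCollection", ReadOptions("myDB"))
```

The returned RDD can then be used like any other Spark RDD, e.g. rdd.filter(_.value > 0).count().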

Save Data to ArangoDB

To save data to ArangoDB, use the function save from the object ArangoSpark with the RDD and the name of your collection. If needed, there is an additional save function with extra write options, such as the name of the database.


ArangoSpark.save(rdd, "myCollection")
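Similarly, the overloaded save function can direct the write to a specific database. A sketch, assuming a WriteOptions type in the same com.arangodb.spark package, with myDB again standing in as a placeholder database name:

```scala
import com.arangodb.spark.{ArangoSpark, WriteOptions}

// Save the RDD's elements as documents into collection "myCollection"
// of database "myDB" (placeholder names).
ArangoSpark.save(rdd, "myCollection", WriteOptions("myDB"))
```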

It would be great if you tried it out and gave us your feedback.



Published at DZone with permission of
