Check out this Spark Connector written in Scala that supports loading data between ArangoDB and Spark — complete with code snippets.
Currently, we are diving deeper into the Apache Spark world. We started with an implementation of a Spark connector written in Scala. The connector supports loading data from ArangoDB into Spark and vice versa. Today, we released a prototype with the aim of including our community in the development process early. Your feedback is more than welcome!
```scala
val conf = new SparkConf()
  .set("arangodb.host", "127.0.0.1")
  .set("arangodb.port", "8529")
  .set("arangodb.user", "myUser")
  .set("arangodb.password", "myPassword")
...
val sc = new SparkContext(conf)
```
```java
SparkConf conf = new SparkConf()
  .set("arangodb.host", "127.0.0.1")
  .set("arangodb.port", "8529")
  .set("arangodb.user", "myUser")
  .set("arangodb.password", "myPassword");
...
JavaSparkContext sc = new JavaSparkContext(conf);
```
Load Data From ArangoDB
To load data from ArangoDB, use the function load on the object ArangoSpark, passing the SparkContext, the name of your collection, and the type of the bean the documents should be mapped to. If needed, there is an additional load function that accepts extra read options, such as the name of the database.
val rdd = ArangoSpark.load[MyBean](sc, "myCollection")
ArangoJavaRDD<MyBean> rdd = ArangoSpark.load(sc, "myCollection", MyBean.class);
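For the variant with extra read options, the call might look like the following sketch. Note that this assumes the prototype exposes a ReadOptions case class with a database field; since the connector is an early prototype, the exact name and shape of the options type may differ.

```scala
// Sketch: load from a collection in a specific database (ReadOptions
// with a database field is assumed here and may change in the prototype).
val rdd = ArangoSpark.load[MyBean](sc, "myCollection", ReadOptions(database = "myDB"))
```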
Save Data to ArangoDB
To save data to ArangoDB, use the function save on the object ArangoSpark, passing your RDD and the name of your collection. If needed, there is an additional save function that accepts extra write options, such as the name of the database.
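A minimal sketch of the save path, mirroring the load examples above. The MyBean contents and the WriteOptions type with a database field are assumptions based on the description; in the prototype the options type may be named or shaped differently.

```scala
// Sketch: save an RDD of beans into an ArangoDB collection.
val documents = sc.parallelize(Seq(new MyBean(), new MyBean()))
ArangoSpark.save(documents, "myCollection")

// With extra write options, e.g. targeting a specific database
// (WriteOptions with a database field is assumed here):
ArangoSpark.save(documents, "myCollection", WriteOptions(database = "myDB"))
```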
It would be great if you tried it out and gave us your feedback.
Published at DZone with permission of Mark Vollmary, DZone MVB. See the original article here.