ArangoDB-Spark Connector

Check out this Spark Connector written in Scala that supports loading data between ArangoDB and Spark — complete with code snippets.

By Mark Vollmary · Oct. 27, 2016 · Code Snippet

We are currently diving deeper into the Apache Spark world and have started with an implementation of a Spark connector written in Scala. The connector supports loading data from ArangoDB into Spark and vice versa. Today, we released a prototype with the aim of including our community in the development process early. Your feedback is more than welcome!


Set Up the SparkContext

First, you need to initialize a SparkContext with the configuration for the Spark connector and the underlying Java driver (see the corresponding blog post) so that it can connect to your ArangoDB server.

Scala

import org.apache.spark.{SparkConf, SparkContext}

// Connection settings for the ArangoDB Spark connector
val conf = new SparkConf()
    .set("arangodb.host", "127.0.0.1")
    .set("arangodb.port", "8529")
    .set("arangodb.user", "myUser")
    .set("arangodb.password", "myPassword")
    // ... further connector options

val sc = new SparkContext(conf)


Java

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Connection settings for the ArangoDB Spark connector
SparkConf conf = new SparkConf()
    .set("arangodb.host", "127.0.0.1")
    .set("arangodb.port", "8529")
    .set("arangodb.user", "myUser")
    .set("arangodb.password", "myPassword");
    // ... further connector options

JavaSparkContext sc = new JavaSparkContext(conf);
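
Note that the snippets above only set the connector-specific properties. To actually run a job, the SparkConf also needs the usual Spark settings, in particular an application name and a master URL. A minimal sketch in Scala for a quick local test (the values are placeholders for your environment):

Scala

import org.apache.spark.{SparkConf, SparkContext}

// Placeholder application name and local master for a quick test run;
// point the master at your cluster for real workloads.
val conf = new SparkConf()
    .setAppName("ArangoSparkDemo")
    .setMaster("local[*]")
    .set("arangodb.host", "127.0.0.1")
    .set("arangodb.port", "8529")
val sc = new SparkContext(conf)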


Load Data From ArangoDB

To load data from ArangoDB, use the function load on the object ArangoSpark with the SparkContext, the name of your collection, and the type of the bean to load the data into. If needed, there is an additional load function with extra read options, such as the name of the database.

Scala

val rdd = ArangoSpark.load[MyBean](sc, "myCollection")


Java

ArangoJavaRDD<MyBean> rdd = ArangoSpark.load(sc, "myCollection", MyBean.class);
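
The post does not show the variant with read options. Assuming the connector exposes a ReadOptions type for this overload (the exact name and signature are an assumption and may differ in your connector version), selecting a specific database could look like this in Scala:

Scala

import com.arangodb.spark.{ArangoSpark, ReadOptions}

// Hypothetical: read "myCollection" from the database "myDB" instead of the default one.
val rdd = ArangoSpark.load[MyBean](sc, "myCollection", ReadOptions(database = "myDB"))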


Save Data to ArangoDB

To save data to ArangoDB, use the function save on the object ArangoSpark with the RDD and the name of your collection. If needed, there is an additional save function with extra write options, such as the name of the database.

Scala/Java

ArangoSpark.save(rdd, "myCollection")
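
Analogously, the save variant with write options is not shown above. Assuming a WriteOptions type that mirrors ReadOptions (again an assumption about the exact API), writing into a specific database could look like:

Scala

import com.arangodb.spark.{ArangoSpark, WriteOptions}

// Hypothetical: write the RDD into "myCollection" in the database "myDB".
ArangoSpark.save(rdd, "myCollection", WriteOptions(database = "myDB"))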


It would be great if you try it out and give us your feedback.
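
If you want a quick starting point for trying it, here is a minimal round-trip sketch in Scala. It assumes the SparkContext sc from the setup section; the bean type, collection names, and import path are placeholders taken from the examples above:

Scala

import com.arangodb.spark.ArangoSpark

// Read documents into an RDD, apply an arbitrary Spark transformation,
// and write the result back into another collection.
val docs = ArangoSpark.load[MyBean](sc, "myCollection")
val subset = docs.filter(_ != null)
ArangoSpark.save(subset, "myOtherCollection")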


Published at DZone with permission of Mark Vollmary, DZone MVB. See the original article here.
