
Transferring Data From Cassandra to Couchbase Using Spark


Started off with Cassandra only to realize that Couchbase suits your needs more? This Spark plugin can help you transfer your data to Couchbase quickly and easily.


There are many NoSQL databases on the market, like Cassandra, MongoDB, and Couchbase, and each has its own pros and cons.

Types of NoSQL Databases

There are mainly four types of NoSQL databases, namely:

  1. Column-oriented
  2. Key-value store
  3. Document-oriented
  4. Graph

Databases that support more than one format are called “multi-model,” like Couchbase, which supports both the key-value and document-oriented models.

Sometimes we choose the wrong database for our application and realize this harsh truth at a later stage.

Then what? What should we do?

Such was the case in our experience: we were using Cassandra as our database and later discovered that it was not fulfilling all of our needs. We needed to find a new database and found Couchbase to be the right fit.

The main difficulty was figuring out how we should transfer our data from Cassandra to Couchbase, because no such plugin was available.

In this blog post, I’ll describe the code I wrote to transfer data from Cassandra to Couchbase using Spark.

All of the code is available here.

Explanation of the Code

Here, I am reading data from Cassandra and writing it to Couchbase. This simple code solves our problem.

The steps involved are:

Reading the configuration:

val config = ConfigFactory.load()
//Couchbase Configuration
val bucketName = config.getString("couchbase.bucketName")
val couchbaseHost = config.getString("couchbase.host")
//Cassandra Configuration
val keyspaceName = config.getString("cassandra.keyspaceName")
val tableName = config.getString("cassandra.tableName")
val idFeild = config.getString("cassandra.idFeild")
val cassandraHost = config.getString("cassandra.host")
val cassandraPort = config.getInt("cassandra.port")


Setting up the Spark configuration and creating the Spark session:

val conf = new SparkConf()
    .setAppName(s"CouchbaseCassandraTransferPlugin")
    .setMaster("local[*]")
    .set(s"com.couchbase.bucket.$bucketName", "")
    .set("com.couchbase.nodes", couchbaseHost)
    .set("spark.cassandra.connection.host", cassandraHost)
    .set("spark.cassandra.connection.port", cassandraPort.toString)
val spark = SparkSession.builder().config(conf).getOrCreate()
val sc = spark.sparkContext
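
For this to compile, both the Cassandra and Couchbase Spark connectors have to be on the classpath. Here is a build.sbt sketch; the versions are assumptions, so check the repository for the exact dependencies it pins:

libraryDependencies ++= Seq(
  // Spark SQL plus the two connectors used above (versions are illustrative)
  "org.apache.spark"     %% "spark-sql"                 % "2.2.0",
  "com.datastax.spark"   %% "spark-cassandra-connector" % "2.0.5",
  "com.couchbase.client" %% "spark-connector"           % "2.2.0"
)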


Reading data from Cassandra:

val cassandraRDD = spark.read
    .format("org.apache.spark.sql.cassandra")
    .options(Map("table" -> tableName, "keyspace" -> keyspaceName))
    .load()
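
Despite the name, cassandraRDD here is a DataFrame, so if you want to sanity-check what was read before writing it out, the usual DataFrame inspection calls work (this is just for verification, not part of the plugin itself):

// Print the inferred schema and a few rows to verify the Cassandra read
cassandraRDD.printSchema()
cassandraRDD.show(5)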


Checking the id field:

The id field is checked to see whether it exists in the Cassandra table. If it does, it is used as the document id in Couchbase as well; otherwise, a random id is generated and assigned to the document.

import org.apache.spark.sql.functions._
val uuidUDF = udf(CouchbaseHelper.getUUID _)
val rddToBeWritten = if (cassandraRDD.columns.contains(idFeild)) {
    cassandraRDD.withColumn("META_ID", cassandraRDD(idFeild))
} else {
    cassandraRDD.withColumn("META_ID", uuidUDF())
}


In a different file:

import java.util.UUID

object CouchbaseHelper {
    def getUUID: String = UUID.randomUUID().toString
}


Writing to Couchbase:

import com.couchbase.spark.sql._
rddToBeWritten.write.couchbase()


You can run this code directly to transfer data from Cassandra to Couchbase – all you need to do is some configuration.

Configurations

All of the configuration can be done by setting environment variables.

Couchbase configuration:

Configuration Name   | Default Value | Description
COUCHBASE_URL        | "localhost"   | The hostname of the Couchbase server.
COUCHBASE_BUCKETNAME | "foobar"      | The name of the bucket to which the data will be transferred.

Cassandra configuration:

Configuration Name      | Default Value   | Description
CASSANDRA_URL           | "localhost"     | The hostname of the Cassandra server.
CASSANDRA_PORT          | 9042            | The port for Cassandra.
CASSANDRA_KEYSPACENAME  | "foobar"        | The name of the Cassandra keyspace.
CASSANDRA_TABLENAME     | "testcouchbase" | The name of the table to be transferred.
CASSANDRA_ID_FEILD_NAME | "id"            | The field to use as the Couchbase document id. If it does not match any column, each document gets a random id.
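
Putting these tables together with the ConfigFactory.load() call shown earlier, the backing application.conf presumably looks something like the sketch below, with each environment variable overriding its default via Typesafe Config substitution. The exact file in the repository may differ:

couchbase {
  host = "localhost"
  host = ${?COUCHBASE_URL}
  bucketName = "foobar"
  bucketName = ${?COUCHBASE_BUCKETNAME}
}

cassandra {
  host = "localhost"
  host = ${?CASSANDRA_URL}
  port = 9042
  port = ${?CASSANDRA_PORT}
  keyspaceName = "foobar"
  keyspaceName = ${?CASSANDRA_KEYSPACENAME}
  tableName = "testcouchbase"
  tableName = ${?CASSANDRA_TABLENAME}
  idFeild = "id"
  idFeild = ${?CASSANDRA_ID_FEILD_NAME}
}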

Code in Action

This is how the data looks on the Cassandra side.

[Image: Cassandra1.png]


As for the Couchbase side, there are two cases.

Case 1: When the id field exists and can be used as the Couchbase document id.

[Image: Couchbase_with_id.png]

Case 2: When the id field does not exist and we need to assign a random id to the documents.

[Image: Couchbase_idChanged.png]

How to Run the Transfer Plugin

Steps to run the code:

  1. Download the code from the repository.
  2. Set the environment variables as described in the Configurations section above.
  3. Run the project using sbt run.



