How to Execute Spark Code on Spark Shell With Cassandra
Knowing how to execute Spark code on the Spark shell with Cassandra is incredibly helpful when you are unable to use an IDE.
In this blog, we will see how to execute our Spark code on the Spark shell using Cassandra. This is very handy for testing and learning, and for times when we have to run our code on a Spark shell rather than in an IDE.
Here, we will use Spark v1.6.2. You can download that version here, and its matching Spark Cassandra connector, spark-cassandra-connector_2.10-1.6.2.jar, can be downloaded here.
So, let's begin with an example.
Create a test table in your Cassandra instance (I am using Cassandra v3.0.10).
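The table below lives in the test_smack keyspace. If you don't already have one, create it first and switch to it so the unqualified INSERTs further down work (a minimal sketch; SimpleStrategy with a replication factor of 1 is my assumption, suitable for a single local test node):
CREATE KEYSPACE IF NOT EXISTS test_smack WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
USE test_smack;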
CREATE TABLE test_smack.movies_by_actor (
    actor text,
    release_year int,
    movie_id uuid,
    genres set<text>,
    rating float,
    title text,
    PRIMARY KEY (actor, release_year, movie_id)
) WITH CLUSTERING ORDER BY (release_year DESC, movie_id ASC);
Insert test data:
INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2010, now(), {'Drama', 'Thriller'}, 7.5, 'The Tourist');
INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2011, now(), {'Animated', 'Comedy'}, 8.5, 'Rango');
INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2012, now(), {'Crime', 'Dark Comedy'}, 6.5, 'Dark Shadows');
INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2013, now(), {'Adventurous', 'Thriller'}, 9.5, 'Transcendence');
INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, rating, title) VALUES ('Johnny Depp', 2013, now(), {'Adventurous', 'Thriller'}, 6.5, 'The Lone Ranger');
INSERT INTO movies_by_actor (actor, release_year, movie_id, genres, title) VALUES ('Johnny Depp', 2014, now(), {'Thriller'}, 'Black Mass');
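Optionally, you can verify the inserts from cqlsh before moving on (this quick check is my addition to the walkthrough):
SELECT release_year, title, rating FROM movies_by_actor WHERE actor = 'Johnny Depp';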
Go to the path where you keep your Spark binaries (e.g., Desktop/spark-1.6.2-bin-hadoop2.6/bin) and start the Spark shell, including the JAR file we downloaded above:
$ sudo ./spark-shell --jars /PATH_TO_YOUR_CASSANDRA_CONNECTOR/spark-cassandra-connector_2.10-1.6.2.jar
When you start Spark using the Spark shell, Spark creates a SparkContext named sc by default.
Now, we need to go through the following steps to connect our Spark shell to Cassandra:
sc.stop
import com.datastax.spark.connector._
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
// Here, localhost is the address where your Cassandra node is running
val conf = new SparkConf(true).set("spark.cassandra.connection.host", "localhost")
val sc = new SparkContext(conf)
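Before running anything fancy, a quick sanity check helps confirm the connector is actually talking to Cassandra (this snippet is my addition and assumes the test_smack keyspace from earlier):
// Read back the rows inserted above; should print 6 if every INSERT succeeded
val rowCount = sc.cassandraTable("test_smack", "movies_by_actor").count
println(s"movies_by_actor contains $rowCount rows")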
It's all done! Now, you can query your database and play with the results. For example, let's count the number of Johnny Depp movies for each year:
sc.cassandraTable("test_smack", "movies_by_actor").select("release_year").as((year: Int) => (year, 1)).reduceByKey(_ + _).collect.foreach(println)
Output:
(2010,1)
(2012,1)
(2013,2)
(2011,1)
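Building on the same pattern, here is one more sketch of my own (not from the original example): the average rating per year. The Black Mass row was inserted without a rating, so rows with a null rating are skipped via the connector's Option-returning getter:
// Average rating per release year; getFloatOption returns None for null ratings
sc.cassandraTable("test_smack", "movies_by_actor")
  .select("release_year", "rating")
  .flatMap(row => row.getFloatOption("rating").map(r => (row.getInt("release_year"), (r, 1))))
  .reduceByKey { case ((s1, n1), (s2, n2)) => (s1 + s2, n1 + n2) }
  .mapValues { case (sum, n) => sum / n }
  .collect
  .foreach(println)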