Get Started With Spark 1.6 Right Away
Here's a short reference to show you where to go and what resources to use when setting up the newly released Apache Spark 1.6.
Let's start with step one: after you install Spark, work through the quick start. Play around with the Scala shell, try some of the exercises, and make sure you understand what's going on. Read the original research papers to get a good idea of the how and why of Spark. The Resilient Distributed Dataset (RDD) is Spark's main abstraction. Everything else builds on RDDs, so you need to be comfortable with them. They are stored in memory without replication and live on between queries, and they can rebuild any lost data by replaying the lineage of transformations that produced them from the source datasets. This works much like a transaction log, which should sound familiar to Kafka fans.
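To make that concrete, here is a minimal sketch of the kind of thing the quick start has you try in the Scala shell (the input path is a placeholder; spark-shell already provides the SparkContext as sc):

```scala
// In spark-shell, a SparkContext is already available as `sc`.
// The input path is a placeholder; point it at any local text file.
val lines = sc.textFile("README.md")

// Transformations are lazy: this only records lineage, nothing runs yet.
val sparkLines = lines.filter(line => line.contains("Spark"))

// Actions trigger computation; if a partition is lost, Spark can
// recompute it by replaying the recorded transformations.
val count = sparkLines.count()
println(s"Lines mentioning Spark: $count")
```

Notice that nothing touches the cluster until the action (count) runs; that lazy lineage is exactly what lets Spark recover lost partitions without replication.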
I would recommend using Apache Spark 1.6 with Scala 2.10. If your Hadoop distribution already ships with Spark, it's easiest to use that version, though it may be based on 1.5 or even 1.4.
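If you're building against Spark 1.6 yourself rather than using a distribution's bundled version, a minimal build.sbt might look something like this (the project name is a placeholder):

```scala
// build.sbt -- minimal sketch; project name is a placeholder
name := "spark-quickstart"

scalaVersion := "2.10.6"

// Mark Spark as "provided" since the cluster supplies it at runtime
// when you launch with spark-submit.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0" % "provided"
```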
Follow along with the latest documentation. Write a small script and submit the application.
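As one sketch of such a script, here is a small self-contained word-count application (the object name, file paths, and jar path below are placeholders, not anything prescribed by the docs):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// A minimal self-contained Spark 1.6 application; names are placeholders.
object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)

    // Count words in an input file passed as the first argument.
    val counts = sc.textFile(args(0))
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // Print a small sample of the results.
    counts.take(10).foreach(println)
    sc.stop()
  }
}
```

After packaging it (for example with sbt package), you would submit it with something like spark-submit --class SimpleApp --master local[4] target/scala-2.10/spark-quickstart_2.10-0.1.jar input.txt, where the jar path depends on your build setup.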