
Get Started With Spark 1.6 Right Away

Here's a short reference to show you where to go and what resources to use when setting up the newly released Apache Spark 1.6.


Let's start with step one: after you install Spark, work through the quick start. Play around with the Scala shell, try some of the exercises, and make sure you understand what's going on. Read the original research papers so you get a good idea of the how and why of Spark. Resilient Distributed Datasets (RDDs) are the main abstraction in Spark. Other things build on them, so you need to be comfortable with them. RDDs are stored in memory without replication and live on between queries. They can rebuild any lost data using the lineage of transformations applied to the source datasets. This is really like a transaction log, which should sound familiar to Kafka fans.
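To make the lineage idea concrete, here is a minimal sketch for the Spark 1.6 Scala shell. It assumes you are inside `spark-shell`, where `sc` (the SparkContext) is already created for you; the numbers are made up for illustration:

```scala
// In spark-shell (Spark 1.6), `sc` is already bound to a SparkContext.
val nums    = sc.parallelize(1 to 100)    // build an RDD from a local collection
val squares = nums.map(n => n * n)        // transformation: lazy, recorded in the lineage
val evens   = squares.filter(_ % 2 == 0)  // another lazy transformation, nothing runs yet
evens.count()                             // action: triggers the actual computation
evens.toDebugString                       // shows the lineage Spark would replay to rebuild lost partitions
```

Nothing executes until the `count()` action; the two transformations are only recorded, and `toDebugString` prints that recorded chain, which is exactly what Spark replays to recompute a lost partition instead of relying on replication.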

I would recommend using Apache Spark 1.6 with Scala 2.10. If you already have a Hadoop distribution that includes Spark, it's easiest to use that version, though it may be based on 1.5 or even 1.4.

Follow along with the latest documentation. Then write a small script and submit it as an application.
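A minimal standalone application might look like the following sketch. The application name, the input path `README.md`, and the jar path in the submit command below are illustrative assumptions, not from the article:

```scala
// A minimal Spark 1.6 / Scala 2.10 application sketch.
// build.sbt would declare: "org.apache.spark" %% "spark-core" % "1.6.0" % "provided"
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Simple Application")
    val sc   = new SparkContext(conf)

    // Count the lines mentioning "Spark" in a file (path is an assumption).
    val lines    = sc.textFile("README.md").cache()
    val numSpark = lines.filter(_.contains("Spark")).count()

    println(s"Lines with Spark: $numSpark")
    sc.stop()
  }
}
```

Package it with sbt and hand it to `spark-submit` (the jar name below assumes a typical sbt layout): `spark-submit --class SimpleApp --master local[4] target/scala-2.10/simple-app_2.10-1.0.jar`.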


Topics: apache spark, kafka, rdd, hadoop, big data

Opinions expressed by DZone contributors are their own.
