
Get Started With Spark 1.6 Right Away

Here's a short reference showing where to go and which resources to use when setting up the newly released Apache Spark 1.6.

Let's start with step one: after you install, work through the quick start. Play around with the Scala shell, try some of the exercises, and make sure you understand what's going on. Read the original research papers to get a good idea of the how and why of Spark. Resilient Distributed Datasets (RDDs) are the main abstraction in Spark. Other things build on them, but you need to be comfortable with them first. They are stored in memory without replication and live on between queries, and they can rebuild any lost data using the lineage of transformations applied to the source datasets. This works much like a transaction log, which will sound familiar to Kafka fans.
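Here is a minimal shell session to get a feel for RDDs, transformations, and actions. This is just a sketch: it assumes you launched bin/spark-shell from the Spark home directory, where sc is the pre-built SparkContext, and that a README.md file exists there.

// Build a base RDD from a text file, then apply a lazy transformation.
val lines = sc.textFile("README.md")
val sparkLines = lines.filter(line => line.contains("Spark"))

// cache() keeps the RDD in memory so later queries can reuse it;
// if a partition is lost, Spark recomputes it from the lineage recorded above.
sparkLines.cache()

// Actions such as count() trigger the actual computation.
println(sparkLines.count())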

I would recommend using Apache Spark 1.6 with Scala 2.10. If you already have a Hadoop distribution that includes Spark, it's easiest to use that version, though it could be 1.5- or even 1.4-based.

Follow along with the latest documentation.   Write a small script and submit the application.
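For example, here is a small standalone application and the command to submit it. This is a sketch based on the Spark 1.6 quick start; the file names, the input path, and the local[4] master are illustrative assumptions, not fixed requirements.

// SimpleApp.scala
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)

    // Count lines containing "a" and "b" in the input file given as the first argument.
    val logData = sc.textFile(args(0)).cache()
    val numAs = logData.filter(_.contains("a")).count()
    val numBs = logData.filter(_.contains("b")).count()
    println(s"Lines with a: $numAs, lines with b: $numBs")

    sc.stop()
  }
}

Package it with sbt against Scala 2.10 and the Spark 1.6 dependency, then submit it (the jar name below is the one the quick start's example project produces and will differ for your build):

bin/spark-submit --class SimpleApp --master "local[4]" target/scala-2.10/simple-project_2.10-1.0.jar README.md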


Topics:
apache spark, kafka, rdd, hadoop, big data
