Get Started With Spark 1.6 Right Away

Here's a short reference showing where to go and which resources to use when setting up the newly released Apache Spark 1.6.

By Tim Spann · Jan. 08, 16 · Tutorial

Let's start with step one: after you install, do the quick start. Play around with the Scala shell, try some of the exercises, and make sure you see what's going on. Read the original research papers so you get a good idea of the how and why of Spark. The Resilient Distributed Dataset (RDD) is the main abstraction in Spark. Other things build on RDDs, so you need to be comfortable with them first. They are stored in memory without replication and live on between queries, and they can rebuild any lost data using the lineage of transformations applied to the source datasets. This is really like a transaction log and should sound familiar to Kafka fans.
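As a rough sketch of that idea, here is the kind of session you might run in the Spark 1.6 Scala shell (the file name and word-count logic are just placeholders): a base RDD, a couple of lazy transformations, an action to trigger the work, and a look at the lineage Spark keeps for recovery.

    // In spark-shell, `sc` (the SparkContext) is already created for you.
    val lines  = sc.textFile("README.md")                     // base RDD built from a file
    val words  = lines.flatMap(_.split(" "))                  // transformation: split lines into words
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _)    // transformations: count each word

    // Nothing has executed yet -- transformations are lazy. An action runs the job:
    counts.take(5).foreach(println)

    // The lineage Spark would replay to rebuild a lost partition:
    println(counts.toDebugString)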

I would recommend using Apache Spark 1.6 with Scala 2.10. If you already have a Hadoop distribution that includes Spark, it's easiest to use that version, though it may be based on 1.5 or even 1.4.
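If you are building your own application against that pairing rather than a vendor's bundled Spark, the sbt build would look roughly like this (the project name and exact versions are illustrative, assuming the Spark 1.6.0 / Scala 2.10 combination above):

    // build.sbt -- minimal sketch for a Spark 1.6 application on Scala 2.10
    name := "spark-quickstart"
    version := "0.1"
    scalaVersion := "2.10.5"

    // "provided" because spark-submit (or the cluster) supplies Spark at runtime
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0" % "provided"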

Follow along with the latest documentation. Write a small script and submit the application.
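A first application need not be more than a word count; the class name, paths, and jar name below are placeholders that match the sbt sketch above.

    // src/main/scala/SimpleApp.scala -- a minimal application to package and submit
    import org.apache.spark.{SparkConf, SparkContext}

    object SimpleApp {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("SimpleApp")
        val sc   = new SparkContext(conf)

        sc.textFile(args(0))                  // input path from the command line
          .flatMap(_.split("\\s+"))           // split each line into words
          .map(word => (word, 1))             // pair each word with a count of 1
          .reduceByKey(_ + _)                 // sum the counts per word
          .saveAsTextFile(args(1))            // write results to the output directory

        sc.stop()
      }
    }

Package it with sbt and hand the jar to spark-submit, pointing --master at local threads or at your cluster:

    sbt package
    spark-submit --class SimpleApp --master "local[4]" \
      target/scala-2.10/spark-quickstart_2.10-0.1.jar input.txt output/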

Opinions expressed by DZone contributors are their own.
