Tools for Troubleshooting, Installation and Setup of Apache Spark Environments

DZone Zone Leader Tim Spann runs through a checklist for setting up Big Data applications with Apache Spark.

By Tim Spann (CORE) · Dec. 01, 15 · Tutorial

Let's run through some tools for installing, setting up, and troubleshooting a Big Data environment built on Apache Spark.

Before you start, validate that you have connectivity and no firewall issues between your nodes. Conn Check is an excellent tool for that.
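
If you'd rather run a quick manual check, a plain JVM socket test also works. A minimal sketch (hypothetical host; it reuses the example master IP from the spark-submit command below, with 7077 as the default standalone master port and 8080 as the master web UI):

import java.net.{InetSocketAddress, Socket}

// Probe the Spark master ports with a short timeout.
object PortCheck extends App {
  val host = "10.13.196.41"              // example master IP; replace with yours
  for (port <- Seq(7077, 8080)) {        // 7077: standalone master, 8080: master web UI
    val socket = new Socket()
    try {
      socket.connect(new InetSocketAddress(host, port), 2000)  // 2-second timeout
      println(host + ":" + port + " is reachable")
    } catch {
      case e: Exception => println(host + ":" + port + " is NOT reachable: " + e.getMessage)
    } finally {
      socket.close()
    }
  }
}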

If you need to set up a number of servers at once, check out Sup.

First, get version 1.8 of the JDK. Apache Spark works best with Scala, Java, and Python, so install the Scala version you need: Scala 2.10 is the standard version and is used for the precompiled downloads, while Scala 2.11 works but requires you to build the package yourself, for which you will need Apache Maven. Install Python 2.6 or later for PySpark, and download SBT for building Scala projects.
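
Once the toolchain is in place, a quick sanity check helps before you build anything. A minimal sketch (run it with the scala command or paste it into the REPL):

// Print the JVM and Scala versions Spark will see.
object VersionCheck extends App {
  println("Java:  " + System.getProperty("java.version"))   // expect 1.8.x
  println("Scala: " + scala.util.Properties.versionString)  // expect 2.10.x for the precompiled downloads
}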

Once everything is installed, a very cool tool to work with Apache Spark is the new Apache Zeppelin. It's great for data exploration and data science experiments, so give it a try.
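
Once Zeppelin is running, a one-paragraph smoke test confirms the Spark connection. A minimal sketch, assuming the default %spark interpreter (which pre-binds the SparkContext as sc):

%spark
// Parallelize a small range and count the even numbers.
val nums = sc.parallelize(1 to 1000)
println(nums.filter(_ % 2 == 0).count())  // should print 500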

An example build.sbt for building a Spark job:

name := "Postgresql Project"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.1"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.5.1"
libraryDependencies += "org.postgresql" % "postgresql" % "9.4-1204-jdbc42"
libraryDependencies += "org.mongodb" % "mongo-java-driver" % "3.1.0"
libraryDependencies += "com.stratio.datasource" % "spark-mongodb_2.10" % "0.10.0"


An example of running a Spark Scala job. Note that spark-submit options must come before the application JAR; anything listed after the JAR is passed to the application itself, so --driver-memory belongs with the other flags:

sudo /deploy/spark-1.5.1-bin-hadoop2.6/bin/spark-submit \
  --packages com.stratio:spark-mongodb-core:0.8.7 \
  --master spark://10.13.196.41:7077 \
  --class "PGApp" \
  --driver-class-path /deploy/postgresql-9.4-1204.jdbc42.jar \
  --driver-memory 1G \
  target/scala-2.10/postgresql-project_2.10-1.0.jar


Items to add to your Spark toolbox:

  • Security: http://mig.mozilla.org/
  • Machine Learning: http://systemml.apache.org/
  • OCR: https://github.com/tesseract-ocr/tesseract


Published at DZone with permission of Tim Spann, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
