
Apache Spark for Big Data Processing


A video from SpringOne2GX 2015 on using Spark with Spring XD, as well as several concrete examples of analyzing data with Spark.


Recorded at SpringOne2GX 2015

Presenters: Ludwine Probst and Ilayaperumal Gopinathan

Big Data Track

Slides: http://www.slideshare.net/SpringCentral/apache-spark-for-big-data-processing

Today, we live in the world of Big Data. Hadoop and MapReduce dominate large-scale data processing. However, the MapReduce model shows its limits for many kinds of workloads, especially the highly iterative algorithms common in machine learning, which pay the cost of writing intermediate results to disk between every pair of jobs.
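
To make that contrast concrete, here is a minimal sketch (ours, not the talk's code) of an iterative computation in Spark's Scala API. The input is cached in memory once and re-scanned on every pass, whereas a chain of MapReduce jobs would shuttle intermediate results through HDFS between iterations. The file path, iteration count, and step size are illustrative.

import org.apache.spark.{SparkConf, SparkContext}

object IterativeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("iterative-sketch").setMaster("local[*]"))

    // Hypothetical input: one numeric value per line. cache() keeps the
    // parsed data in memory across iterations.
    val values = sc.textFile("data/values.txt").map(_.toDouble).cache()

    // Gradient descent on the mean squared error; w converges to the mean
    // of the data. Each pass re-scans the cached RDD, not the file.
    var w = 0.0
    for (_ <- 1 to 20) {
      val grad = values.map(v => w - v).mean()
      w -= 0.5 * grad
    }
    println(s"Estimate after 20 passes: $w")
    sc.stop()
  }
}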

Spark is an in-memory data processing framework that, unlike Hadoop, supports interactive and real-time analysis of large datasets. Furthermore, Spark offers a more flexible programming model and delivers better performance than Hadoop.
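
That interactive style looks like the following when typed line by line into the spark-shell REPL, which provides the SparkContext as sc (the log path and filter string here are hypothetical). The first action loads and caches the filtered data; follow-up queries reuse it in memory instead of re-reading the file.

// In spark-shell, `sc` is already defined.
val lines  = sc.textFile("hdfs:///logs/access.log")   // hypothetical path
val errors = lines.filter(_.contains("ERROR")).cache()

errors.count()                              // first action: loads and caches the RDD
errors.map(_.split(" ")(0)).countByValue()  // ad-hoc follow-up query on the cached data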

In this talk, we give an overview of Spark and tour its ecosystem, in particular Spark Streaming and MLlib, through a concrete example. We also show how you can use Spark with Spring XD, allowing you to take advantage of the strengths of each platform.
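
To give a flavor of the streaming side of that ecosystem, here is a minimal Spark Streaming sketch (ours, not the presenters' example) that counts words arriving on a TCP socket in ten-second micro-batches; the host, port, and batch interval are all illustrative. MLlib builds on the same core engine, and the talk shows how this kind of streaming logic can be wired into a Spring XD pipeline.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    // Two local threads: one to receive data, one to process it.
    val conf = new SparkConf().setAppName("streaming-sketch").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Assumes a text source on localhost:9999, e.g. started with `nc -lk 9999`.
    val lines  = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print() // prints a sample of the counts for each 10-second batch

    ssc.start()
    ssc.awaitTermination()
  }
}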


Topics:
java, spring, spark
