
Code Analyzer for Apache Spark

The new Code Analyzer for Apache Spark promotes DevOps for Big Data, helping to tear down the wall between developers and operations.


It was great talking to Chad Carson, co-founder of Pepperdata, about their release of Code Analyzer as another step in their mission to promote DevOps for big data.

The macro trends that have been taking place in Big Data over the past year include:

  • A move from experimentation to production.

  • Hadoop MapReduce dominates production; Apache Spark gaining fast.

  • Most companies are migrating to cloud or hybrid.

  • Production Big Data is adopting standard DevOps practices.

Movement from R to Spark is consistent with what I've heard in my most recent Big Data interviews. Based on Databricks' Apache Spark survey, people are moving to Spark for the following reasons:

  • Performance (91%).

  • Advanced analytics (82%).

  • Ease of programming (76%).

  • Ease of deployment (69%).

  • Real-time streaming (51%).

Spark is easier for non-data engineers to use. Spark is not without problems, however. Customer pain points include:

  • Hidden execution details (hard to know why performance is slow).

  • Developers can’t connect code to hardware usage.

  • Achieving acceptable performance in production is complex (for example, the “cluster weather” problem).

  • Run times are inconsistent.

  • Hardware is underused, with no visibility into how to improve utilization.
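The "hidden execution details" pain point is concrete: Spark does expose per-stage metrics through its monitoring REST API (the driver UI's `/api/v1/applications/<app-id>/stages` endpoint), but few developers look at them. As a minimal sketch of surfacing slow code paths from that data, the snippet below ranks stages by executor run time; the sample payload is hypothetical, though its field names (`stageId`, `name`, `executorRunTime`) match the real API, and in a live cluster you would fetch the JSON from the driver's UI port instead of hard-coding it.

```python
# Hypothetical sample of what Spark's /api/v1/.../stages endpoint returns;
# in practice you'd fetch this JSON from the driver UI (default port 4040).
SAMPLE_STAGES = [
    {"stageId": 0, "name": "map at etl.py:12",
     "executorRunTime": 4200, "inputBytes": 1_500_000_000},
    {"stageId": 1, "name": "reduceByKey at etl.py:18",
     "executorRunTime": 61000, "inputBytes": 900_000_000},
]

def slowest_stages(stages, top=1):
    """Rank stages by total executor run time so slow code paths stand out."""
    return sorted(stages, key=lambda s: s["executorRunTime"], reverse=True)[:top]

for s in slowest_stages(SAMPLE_STAGES):
    # Each stage name embeds the call site, which is what lets a tool
    # connect resource usage back to a line of application code.
    print(f"stage {s['stageId']} ({s['name']}): {s['executorRunTime']} ms")
```

Note how the stage name carries the call site (`reduceByKey at etl.py:18`); that is the hook that lets a tool correlate cluster resource consumption with application code.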

Code Analyzer addresses these problems for:

  • Developers (data engineers), who make up 41% of users.

    • Precisely correlates resource utilization with application code.

    • Provides a contextual understanding of overall cluster resource consumption.

    • Gives users the ability to compare multiple runs of the same application.

  • Operators

    • Helps developers self-solve performance problems.

    • Identifies problem apps and sends a link to the developer with details.
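The "compare multiple runs of the same application" capability can be illustrated with a small, hedged sketch: given wall-clock durations of repeated runs, flag the run-to-run inconsistency that the pain points above describe. The function name and the 25% tolerance threshold are illustrative assumptions, not Code Analyzer's actual logic.

```python
from statistics import mean, pstdev

def run_time_consistency(durations_s, tolerance=0.25):
    """Return (coefficient_of_variation, is_consistent) for a set of runs.

    A coefficient of variation (stddev / mean) above `tolerance` marks the
    runs as inconsistent -- e.g. a victim of the "cluster weather" problem.
    The 0.25 threshold is an illustrative choice, not a recommended value.
    """
    cv = pstdev(durations_s) / mean(durations_s)
    return cv, cv <= tolerance

# Three similar runs plus one outlier, in seconds:
cv, ok = run_time_consistency([310, 295, 580, 305])
print(f"CV={cv:.2f}, consistent={ok}")  # the outlier run trips the flag
```

Flagging the outlier run is the easy half; the value of a tool like Code Analyzer is then explaining *why* that run differed, by tying its resource profile back to the code and to what else was on the cluster at the time.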

This helps break down the wall between development and operations so they're working together to solve problems, improve code quality, and accelerate code delivery. 



Opinions expressed by DZone contributors are their own.
