
Code Analyzer for Apache Spark

The new Code Analyzer for Apache Spark promotes DevOps for Big Data, helping to tear down the wall between developers and operations.


It was great talking to Chad Carson, co-founder of Pepperdata, about their release of Code Analyzer, another step in their mission to promote DevOps for Big Data.

The macro trends that have been taking place in Big Data over the past year include:

  • Workloads have moved from experimentation to production.

  • Hadoop MapReduce still dominates production, with Apache Spark gaining fast.

  • Most companies are migrating to the cloud or to hybrid deployments.

  • Production Big Data teams are adopting standard DevOps practices.

The movement from R to Spark is consistent with what I've heard in my most recent Big Data interviews. Based on Databricks' Apache Spark survey, people are moving to Spark for the following reasons:

  • Performance (91%).

  • Advanced analytics (82%).

  • Ease of programming (76%).

  • Ease of deployment (69%).

  • Real-time streaming (51%).

Spark is easier for non-data engineers to use. Spark is not without problems, however. Customer pain points include:

  • Hidden execution details (hard to know why performance is slow).

  • Developers can’t connect code to hardware usage.

  • Achieving acceptable performance in production is complex (for example, the “cluster weather” problem).

  • Run times are inconsistent.

  • Hardware is underused, with no visibility into how to improve utilization.
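
The underutilization problem above is at least partly observable with Spark's own monitoring REST API, which exposes per-executor storage-memory figures. Below is a minimal sketch, assuming the response shape of the `/api/v1/applications/{app-id}/executors` endpoint; the sample JSON and the 25% threshold are invented for illustration, not taken from Pepperdata's product:

```python
import json

# Hypothetical sample of the JSON returned by Spark's monitoring REST API
# endpoint /api/v1/applications/{app-id}/executors (fields abbreviated;
# the numbers here are invented for illustration).
SAMPLE = json.dumps([
    {"id": "1", "maxMemory": 4_000_000_000, "memoryUsed": 3_200_000_000},
    {"id": "2", "maxMemory": 4_000_000_000, "memoryUsed": 400_000_000},
    {"id": "3", "maxMemory": 4_000_000_000, "memoryUsed": 300_000_000},
])

def underused_executors(executors_json: str, threshold: float = 0.25) -> list[str]:
    """Return ids of executors whose storage-memory utilization is below threshold."""
    executors = json.loads(executors_json)
    return [
        e["id"]
        for e in executors
        if e["maxMemory"] > 0 and e["memoryUsed"] / e["maxMemory"] < threshold
    ]

print(underused_executors(SAMPLE))  # executors 2 and 3 sit far below 25% usage
```

This only covers storage memory on one snapshot; a real tool would also correlate CPU, I/O, and time series across the run, which is exactly the gap products like Code Analyzer aim to fill.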

Code Analyzer addresses these problems for:

  • Developers (data engineers), who make up 41% of users.

    • Precisely correlates resource utilization with application code.

    • Provides a contextual understanding of overall cluster resource consumption.

    • Gives users the ability to compare multiple runs of the same application.

  • Operators

    • Helps developers self-solve performance problems.

    • Identifies problem apps and sends a link to the developer with details.
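
To make the "compare multiple runs of the same application" idea concrete, here is a minimal, hypothetical sketch: given per-run resource summaries (the metric names and numbers are invented, not Code Analyzer's actual output), report the relative change of each metric between a baseline and a new run:

```python
def compare_runs(baseline: dict[str, float], candidate: dict[str, float]) -> dict[str, float]:
    """Relative change per metric: +0.50 means the candidate used 50% more."""
    return {
        metric: (candidate[metric] - baseline[metric]) / baseline[metric]
        for metric in baseline
        if metric in candidate and baseline[metric] != 0
    }

# Two hypothetical runs of the same Spark application.
run_a = {"duration_s": 600.0, "cpu_core_s": 4800.0, "peak_mem_gb": 96.0}
run_b = {"duration_s": 900.0, "cpu_core_s": 4900.0, "peak_mem_gb": 97.0}

deltas = compare_runs(run_a, run_b)
# Duration jumped 50% while CPU and memory barely moved -- a hint that the
# slowdown is contention ("cluster weather") rather than the code itself.
print({k: round(v, 2) for k, v in deltas.items()})
```

Even this toy comparison shows why run-over-run views matter: a duration regression with flat resource usage points at the cluster, while one that tracks a resource spike points back at the code.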

This helps break down the wall between development and operations so they're working together to solve problems, improve code quality, and accelerate code delivery. 

Topics:
big data, data analytics, apache spark, code analyzer

Opinions expressed by DZone contributors are their own.
