It was great talking with Chad Carson (Co-Founder) and Ed Colonna (V.P. of Marketing and Business Development) at Pepperdata to learn how they are implementing a DevOps methodology for Big Data. This is a topic I was keen to learn more about, since I recently concluded a series of interviews for DZone's Cloud, Big Data, and IoT research guides.
Drawing on his own background, Chad sees Hadoop being used mainly for web search and Machine Learning for ad queries. Among Pepperdata's users, production involvement in Big Data initiatives is substantial: 50 million Hadoop and Spark jobs run every year, and that number keeps growing as data volumes grow.
The heavy users are in FinTech/insurance, healthcare, technology companies, and AdTech.
Pepperdata deploys in production alongside Big Data clusters to observe real workloads and to:
Identify performance problems.
Automate performance improvement.
Identify and resolve resource contention issues.
Provide detailed insight on hardware usage for chargeback and capacity reporting.
Provide feedback that it is safe to deploy.
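As an illustration of the hardware-usage data behind chargeback and capacity reporting, Hadoop clusters already expose cluster-wide counters through YARN's ResourceManager REST API (`/ws/v1/cluster/metrics`). The sketch below is not Pepperdata's implementation; it assumes a hypothetical `summarize_cluster` helper and a made-up sample payload, using the field names YARN actually returns.

```python
import json

def summarize_cluster(metrics: dict) -> dict:
    """Summarize memory and vcore utilization from a YARN
    /ws/v1/cluster/metrics response (the 'clusterMetrics' object)."""
    m = metrics["clusterMetrics"]
    return {
        "memory_used_pct": round(100.0 * m["allocatedMB"] / m["totalMB"], 1),
        "vcores_used_pct": round(
            100.0 * m["allocatedVirtualCores"] / m["totalVirtualCores"], 1
        ),
        "apps_running": m["appsRunning"],
    }

# Sample payload using YARN's real field names; the values are made up.
sample = json.loads("""
{"clusterMetrics": {"appsRunning": 12,
  "allocatedMB": 393216, "totalMB": 524288,
  "allocatedVirtualCores": 96, "totalVirtualCores": 128}}
""")

print(summarize_cluster(sample))
# → {'memory_used_pct': 75.0, 'vcores_used_pct': 75.0, 'apps_running': 12}
```

In a real deployment the payload would come from an HTTP GET against the ResourceManager rather than an inline string, and per-tenant chargeback would aggregate per-application usage instead of whole-cluster totals.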
Developers now get the performance data that operations used to get, so they can iterate more quickly. Performance recommendations are driven by heuristics derived from real-world behavior. New and improved heuristics are available in Dr. Elephant, part of the open-source community.
Pepperdata provides a cluster-wide view, so developers can see how individual apps perform in the context of the cluster as a whole. Users can move cleanly between the developers' view and the operations' view to see the performance of a particular app or piece of hardware earlier in the process.
Performance is critical to the success of Big Data applications, so developers must design with performance in mind. With real-time feedback, developers can see where their apps are having issues and address problems quickly.