Monitoring Apache Spark: Why Is It Challenging?
Using Apache Spark? If it's a part of your infrastructure, you'd better have instrumented it. That's tougher than it sounds, though — read on to find out why.
Large-scale data processing has moved into a new age of sophistication and the greatest evidence is the increasing requirement for processing data in real time. The drivers for this move are many, but the basic impetus can be understood by simply recalling the classic Time-Value-of-Data concept and how the value of data is at its peak within the first few minutes of creation, then rapidly tapers off over time. Real-time data processing is an essential capability in more and more use cases. Fortunately, there are numerous real-time data processing frameworks available, such as Apache Spark, Storm, Heron, Flink, Apex, Kafka Streams, and others. But while each framework provides a new set of important possibilities, it also adds substantial complexities and operational challenges.
In this post, we will discuss the processing of streaming data using one of the most popular data processing technologies, Apache Spark. This discussion will go in-depth into how you should think about monitoring your Spark data pipelines. But first, let’s step back and look at the basics.
Spark is a general-purpose engine for fast, large-scale data processing that is seeing wide adoption in both the batch and streaming worlds. Before looking at the best ways to monitor Spark applications, it is important to understand the execution model and the various components of the Spark architecture.
Spark Manager / Master
Manages cluster resources. It can be run in one of the following modes:
- Standalone: A cluster manager included with Spark for easy setup
- Mesos: Abstracts system resources and makes them available as an elastic distributed system
- YARN: The default resource manager starting with Hadoop 2
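The cluster manager is selected via the master URL passed to spark-submit (or set on the application itself). The sketch below illustrates the URL format for each mode; the host names and ports are placeholders, not values from any real cluster.

```python
# Illustrative master URLs for each deployment mode. Host names and
# ports are placeholders for your own cluster endpoints.
MASTER_URLS = {
    "local":      "local[*]",                  # all cores on one machine (dev/testing)
    "standalone": "spark://master-host:7077",  # Spark's built-in cluster manager
    "mesos":      "mesos://mesos-host:5050",   # Apache Mesos
    "yarn":       "yarn",                      # resource manager read from Hadoop config
}

def submit_command(mode: str, app: str) -> str:
    """Build an illustrative spark-submit command line for a given mode."""
    return f"spark-submit --master {MASTER_URLS[mode]} {app}"
```

For example, `submit_command("standalone", "app.py")` yields a spark-submit invocation pointed at the standalone master.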
Spark Worker
In standalone mode, the workers are processes running on individual nodes that manage resource allocation requests for that node and also monitor the executors.
Spark Executor
An executor is a process launched on a worker node for each application created in the cluster. It does the actual work of running the application's tasks.
Spark Driver
A process that creates the SparkContext and drives the overall application from start to finish.
The Spark driver creates a SparkContext and communicates with the Manager/Master to acquire resources on the worker nodes. The workers then create executor processes locally for every Spark application. Once this is complete, the driver communicates directly with the executors to get the work done, while the workers constantly monitor the executors for failures. A DAGScheduler in the driver internally breaks the computation plan down into stages and tasks, which finally get executed on the executors. The executors for an application stay alive as long as the driver is alive.
As you can see, many components come together to make a Spark application work. If you're planning to deploy Spark in your production environment, you want to be sure you can monitor the different components, understand performance parameters, get alerted when things go wrong, and know how to troubleshoot issues. Reaching that level of confidence and maturity takes more than just collecting and graphing various metrics. This post and the next one in this series describe the challenges and best practices of monitoring Spark.
Why Is Monitoring Spark Streaming Challenging?
The question seems simple but is probably the most difficult one you need to answer. The Spark UI provides a basic dashboard, but it is not sufficient if you're moving to a production-ready setup. Without insight into Spark's inner workings and its components, you're flying blind. So, what should your monitoring setup look like?
Let’s break down the problem into more understandable chunks. You should think of the problem as monitoring at three levels:
- The Spark infrastructure, consisting of the Master and the Workers
- Applications that run on the Spark infrastructure
- The underlying hosts: disks, CPU, network
Once you have broken the monitoring problem down into the three levels shown above, you will also see how they are interdependent. Anything that affects a host, say a disk failure, propagates through the Spark infrastructure and finally to the application. Setting up a correlated view of health across the three levels is critical. You need a monitoring system that understands these interdependencies and helps you visually correlate a problem in the application with an issue in the infrastructure and, finally, with an issue on a host. Without this sophisticated correlation, you'll spend hours trying to debug the root cause when things go awry.
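One way to picture this correlated view is to roll host health up through the services to the application, so a failing disk surfaces as a red service and, in turn, a red application. The sketch below is a toy model of that idea; the health states and worst-of rollup rule are assumptions for illustration, not any particular product's API.

```python
def rollup_health(service_hosts, host_health):
    """Toy vertical-correlation rollup: a service is as healthy as its
    worst host, and the application is as healthy as its worst service.
    service_hosts: {service_name: [host, ...]}
    host_health:   {host: "green" | "orange" | "red"}
    """
    order = {"green": 0, "orange": 1, "red": 2}
    worst = lambda states: max(states, key=lambda s: order[s])

    service_health = {
        svc: worst([host_health[h] for h in hosts])
        for svc, hosts in service_hosts.items()
    }
    app_health = worst(list(service_health.values()))
    return service_health, app_health
```

With this model, a single red host makes its service red and the whole application red, which is exactly the propagation described above.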
OpsClarity's concept of the health of a service is a good example of how this can be achieved. OpsClarity can automatically discover the entire service topology (see the image below) of your data pipeline and applications. Aided by this visual, logical, real-time topology, OpsClarity correlates concerns across the three levels (vertical correlation) as well as across services in the data pipeline (horizontal correlation). A failure at one of these levels automatically propagates through the other layers. In the screenshot below, the red, green, and orange circles around services correspond to the health of each service. Starting with a service, you can see the components that make it up (e.g., Driver, Executor, Master, and Worker for Spark), easily dig into the hosts running those components, and understand their health. This visual representation drastically simplifies troubleshooting because correlating problems becomes much easier.
How Should I Configure My Monitoring?
Spark exposes metrics for each of the above components through different endpoints. For example, if you want to look at the Spark driver's details, you need to know the exact URL, and Spark keeps you guessing on what that URL is. The typical problem arises when you start your driver in cluster mode: how do you detect which worker node the driver was started on? Once there, how do you identify the port on which the Spark driver exposes its UI? This is a common annoyance for developers and DevOps professionals managing Spark clusters. In fact, most end up running their driver in client mode as a workaround so that they have a fixed URL endpoint to look at, at the cost of losing failover protection for the driver. Your monitoring solution should automatically figure out where the driver for your application is running, discover the port the application exposes, and configure itself to start collecting metrics.
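Absent such a solution, one crude workaround is to probe candidate hosts for the driver's REST endpoint: the driver UI binds to port 4040 and falls back to 4041, 4042, and so on when the port is taken, and it serves application metadata at /api/v1/applications. A hedged sketch, where the host list and port range are assumptions about your cluster:

```python
import json
from urllib.error import URLError
from urllib.request import urlopen

def probe_driver_ui(hosts, ports=range(4040, 4051), timeout=1.0):
    """Probe candidate (host, port) pairs for a live Spark driver UI by
    requesting the /api/v1/applications REST endpoint. Returns a list of
    (host, port, app_name) tuples for every driver found. Hosts that
    refuse the connection or return non-JSON are skipped."""
    found = []
    for host in hosts:
        for port in ports:
            url = f"http://{host}:{port}/api/v1/applications"
            try:
                with urlopen(url, timeout=timeout) as resp:
                    apps = json.load(resp)
            except (URLError, OSError, ValueError):
                continue  # nothing listening here, or not a Spark UI
            for app in apps:
                found.append((host, port, app.get("name")))
    return found
```

This is brute force, and it is exactly the kind of chore a monitoring solution should do for you automatically.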
With a dynamic infrastructure like Spark, your cluster can be resized on the fly. You must ensure that newly spawned components (workers, executors) are automatically configured for monitoring; there is no room for manual intervention here. You shouldn't miss monitoring new processes that show up on the cluster. On the other hand, you shouldn't generate false alerts when executors get moved around. A generic monitoring solution will typically start alerting you if an executor gets killed and restarts on a new worker, because such solutions simply check whether a port is up or down. With a real-time streaming system like Spark, the core idea is that things can move around all the time.
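A monitoring rule that tolerates this churn has to diff snapshots of the executor set (for example, taken from the driver's /api/v1/applications/&lt;app-id&gt;/executors endpoint) and alert on lost capacity rather than on any single executor disappearing. A minimal sketch of that logic:

```python
def executor_churn(prev_ids, curr_ids):
    """Compare two snapshots of executor IDs and report churn.
    A replaced executor (one removed, one added) is normal Spark
    behaviour; only a net loss of capacity should page anyone."""
    prev, curr = set(prev_ids), set(curr_ids)
    return {
        "removed": sorted(prev - curr),
        "added": sorted(curr - prev),
        "capacity_lost": len(curr) < len(prev),  # the alert-worthy signal
    }
```

An executor moving from one worker to another shows up as one removal plus one addition with `capacity_lost` still false, so no alert fires.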
Consider a checkpointed Spark application. After a driver failure, the application can come back with the same application context on another worker. In this case, the metric collection, port checks, and other monitors that were configured should propagate to the new driver node with their configuration intact. Would your monitoring solution be able to handle such scenarios? Simply collecting metrics and displaying them on dashboards will not work for data-centric systems like Spark.
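On the monitoring side, handling such a failover means re-pointing every check that was bound to the old driver host at the new one, without losing thresholds or port settings. The sketch below illustrates the idea with a hypothetical monitor-config structure; the field names are made up for illustration.

```python
def repoint_monitors(monitors, old_host, new_host):
    """When a checkpointed driver restarts on another worker, every
    check bound to the old driver host must follow it. Rewrites the
    target host on affected monitor definitions and leaves everything
    else (ports, thresholds, check types) intact."""
    return [
        {**m, "host": new_host} if m["host"] == old_host else m
        for m in monitors
    ]
```

A checked-in rule like this is the difference between monitoring that survives a driver failover and monitoring that silently watches a dead host.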
OpsClarity's auto topology discovery and auto-configuration detect and configure Spark's dynamically changing infrastructure on the fly. When new workers are added to the Spark cluster, they are automatically configured just like the existing workers, and the same goes for executors: new executors are automatically configured for monitoring, with no manual intervention required. Because OpsClarity auto-discovers the topology, it also knows where the driver application is running and configures itself to start collecting metrics. And you don't have to deal with false alerts when executors get moved around, since OpsClarity only alerts you when executor deaths result in performance degradation.
You need to be 100% confident in your monitoring setup and understand how it would behave with such a dynamic infrastructure. Make sure you think through each of the cases described above. Otherwise, when all hell breaks loose, you lose.
In the next part of this Monitoring Apache Spark series, we will look at the key Spark metrics, their behavior patterns, and how to effectively troubleshoot a complex, distributed Spark Streaming application.
Published at DZone with permission of Swaroop Ramachandra, DZone MVB. See the original article here.