Execute Spark Applications With Apache Livy
To run a Spark application as a batch job, we provide the path to the application's entry point, along with its parameters, via the REST API.
These days, Spark (either as Apache Spark or as Databricks on any of the cloud platforms) has become the de facto tool for data processing. Many of us are comfortable developing Spark applications using Scala/Spark or Python/Spark. Testing the application as we develop it is also fairly simple: since we tend to develop either on our laptops or on VMs, we usually have a local Spark setup that we can invoke.
When Spark applications are written as part of a larger data pipeline, the most common mechanism is to run them with a 'spark-submit' command. In many cases, one or more Spark applications are executed one after the other, with the 'spark-submit' command issued by a scheduler like Oozie, cron, or Airflow.
While 'spark-submit' is straightforward and simple, matters get complicated when we have to execute Spark jobs from other applications. Recently, I faced exactly this situation: I had to execute a couple of Spark jobs from a microservice. This presented a problem. When we perform a 'spark-submit', the system expects Spark and its associated libraries to be present in the same environment, along with the required environment variables. As the microservices were going to be deployed on Docker and Kubernetes, that was not an option.
Enter Apache Livy
Fortunately for me, one of my colleagues suggested I look at the Apache Livy project. At the time of writing, Apache Livy is still an incubating project and is at version 0.7. The Livy server runs on a port of its own and lets us interact with Spark applications via a REST API, which makes executing Spark jobs very simple. When using Apache Livy, we have to ensure that the Spark application and the Livy server are on the same VM. To run a Spark application as a batch job, we provide the path to the application's entry point, along with its parameters, through the REST API. Once a batch is submitted, Livy allows us to monitor the status of the job using another REST endpoint.
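As a concrete sketch of that contract: the port and JSON field names below are Livy's documented defaults, while the jar path and class name are made up for illustration.

```python
import json

# Livy listens on port 8998 by default. The batch API is:
#   POST /batches             -> submit an application, returns {"id": ..., "state": ...}
#   GET  /batches/<id>/state  -> poll the state (e.g. "starting", "running", "success", "dead")
LIVY_URL = "http://localhost:8998"


def batch_payload(file, class_name=None, args=None):
    """Build the JSON body for POST /batches.

    'file' is the path to the application entry point (a jar or .py file)
    as seen from the Livy server, which is why the application and the
    Livy server need to live on the same machine.
    """
    payload = {"file": file, "args": list(args or [])}
    if class_name:
        payload["className"] = class_name
    return payload


# Illustrative jar path and class name:
print(json.dumps(batch_payload("/opt/jobs/etl.jar", "com.example.Etl", ["2021-01-01"])))
```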
In this article, I am presenting a helper class that makes it easier to interact with 'Livy-fronted Spark applications', as I have started calling them.
Here is the helper class:
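What follows is a minimal sketch of such a helper using only the Python standard library. The endpoints and JSON fields follow Livy's REST API, but the class name, method names, and defaults are my own, and error handling is pared down.

```python
import json
import time
import urllib.request


class LivyBatchHelper:
    """Thin wrapper around the batch endpoints of a Livy server."""

    def __init__(self, livy_url="http://localhost:8998"):
        self.livy_url = livy_url.rstrip("/")

    def _request(self, method, path, body=None):
        """Issue a JSON-in, JSON-out HTTP request against the Livy server."""
        data = json.dumps(body).encode("utf-8") if body is not None else None
        req = urllib.request.Request(
            self.livy_url + path,
            data=data,
            headers={"Content-Type": "application/json"},
            method=method,
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def submit_batch(self, file, class_name=None, args=None, conf=None):
        """POST /batches: submit the application entry point and its parameters."""
        payload = {"file": file, "args": list(args or [])}
        if class_name:
            payload["className"] = class_name
        if conf:
            payload["conf"] = conf
        return self._request("POST", "/batches", payload)["id"]

    def batch_state(self, batch_id):
        """GET /batches/{id}/state: e.g. starting, running, success, dead."""
        return self._request("GET", "/batches/%d/state" % batch_id)["state"]

    def wait_for_batch(self, batch_id, poll_seconds=10):
        """Poll the batch until it reaches a terminal state, then return it."""
        while True:
            state = self.batch_state(batch_id)
            if state in ("success", "dead", "killed"):
                return state
            time.sleep(poll_seconds)
```

Cancelling a running job can be wired up the same way, since Livy also exposes a DELETE /batches/{id} endpoint.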
Using the Helper Class
After defining the class, we can run Spark jobs as below:
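As an illustration of the flow, here is the submit-then-poll cycle condensed into one standalone function; the server URL, jar path, class name, and arguments are all hypothetical.

```python
import json
import time
import urllib.request


def run_spark_batch(livy_url, file, class_name, args, poll_seconds=10):
    """Submit a Spark application through Livy and block until it finishes."""
    body = json.dumps({"file": file, "className": class_name, "args": args}).encode("utf-8")
    req = urllib.request.Request(
        livy_url + "/batches",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        batch_id = json.loads(resp.read().decode("utf-8"))["id"]
    # Poll GET /batches/{id}/state until the job reaches a terminal state.
    while True:
        with urllib.request.urlopen("%s/batches/%d/state" % (livy_url, batch_id)) as resp:
            state = json.loads(resp.read().decode("utf-8"))["state"]
        if state in ("success", "dead", "killed"):
            return state
        time.sleep(poll_seconds)


# Example invocation (requires a running Livy server; all values are illustrative):
# run_spark_batch("http://localhost:8998", "/opt/jobs/etl.jar",
#                 "com.example.Etl", ["2021-01-01"])
```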
Apache Livy makes the task of invoking and monitoring remote Spark jobs quite easy.
In a follow-up article, I will share a similar helper for Databricks jobs.
References
- Apache Livy — https://livy.incubator.apache.org/
- Apache Livy Getting Started — https://livy.incubator.apache.org/get-started/
- Apache Livy Examples — https://livy.incubator.apache.org/examples/
- Apache Livy REST API — https://livy.incubator.apache.org/docs/latest/rest-api.html