Execute Spark Applications With Apache Livy
To run a Spark application as a batch job, we provide the path to the application entry point, along with its parameters, all via the REST API.
Introduction
These days, Spark, either as Apache Spark or as Databricks on any one of the cloud platforms, has become the de facto tool for data processing. Many of us are comfortable developing Spark applications using Scala/Spark or Python/Spark. While application development may have its complexities, testing the application is fairly simple for most of us. As we tend to develop the application either on our laptops or on VMs, we usually have a local Spark setup that we can invoke.
When Spark applications are written as part of a larger data pipeline, one of the most common mechanisms is to run the application using a 'spark-submit' command. In many cases, one or more Spark applications are executed one after the other, where the 'spark-submit' command is executed by a scheduler like Oozie, cron, or Airflow.
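In practice, such a scheduler step usually boils down to shelling out to 'spark-submit'. As a point of reference, here is a minimal sketch of that pattern; the cluster manager, entry-point path, and arguments are assumptions purely for illustration.
import subprocess

# Minimal sketch of a pipeline step that shells out to spark-submit.
# The master, deploy mode, entry point, and arguments are hypothetical.
result = subprocess.run(
    [
        "spark-submit",
        "--master", "yarn",            # assumption: a YARN-managed cluster
        "--deploy-mode", "cluster",
        "/jobs/my_spark_app.py",       # hypothetical application entry point
        "--input", "/data/raw",        # hypothetical application arguments
        "--output", "/data/processed",
    ],
    capture_output=True,
    text=True,
)
print(result.returncode)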
While the option of 'spark-submit' is fairly straightforward and simple, matters are a bit more complicated if we have to execute Spark jobs from other applications. Recently, I was faced with a similar situation, where I had to execute a couple of Spark jobs from a microservice. This situation presented a problem: when we perform a 'spark-submit', the system expects Spark and the associated libraries to be present in the same environment, along with the required environment variables. As the microservices were going to be deployed in Docker and Kubernetes, we had a situation.
Enter Apache Livy
Fortunately for me, one of my colleagues suggested I look at the Apache Livy project. At the time of writing, Apache Livy is still an incubating project and is at version 0.7. Livy runs as a server on a port and allows us to interact with Spark applications via a REST API, which makes executing Spark jobs very simple. When using Apache Livy, we have to ensure that the Spark application and the Livy server are on the same VM. To run a Spark application as a batch job, we provide the path to the entry point of the application, along with its parameters, all via the REST API. Once a batch is submitted, Livy lets us monitor the status of the job using another REST endpoint.
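Before wrapping this in a helper class, here is a minimal sketch of the raw REST interaction, based on the batches endpoints described in the Livy REST API documentation; the server URL and file path below are assumptions for illustration.
import requests

LIVY_URL = "http://livy-server:8998"  # assumption: Livy's default port

# Submit a batch: POST /batches with the application entry point and arguments.
payload = {
    "file": "/jobs/my_spark_app.py",  # hypothetical path visible to the Livy server
    "args": ["--input", "/data/raw"],
}
batch = requests.post(f"{LIVY_URL}/batches", json=payload).json()
print(batch["id"], batch["state"])

# Monitor the batch: GET /batches/{id}/state until a terminal state is reached.
state = requests.get(f"{LIVY_URL}/batches/{batch['id']}/state").json()
print(state["state"])  # e.g. "starting", "running", "success", or "dead"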
In this article, I am presenting a helper class that makes it easier to interact with 'Livy-fronted Spark applications', as I have started calling them.
Helper Class
Here is the helper class to interact with Livy and 'Livy-fronted Spark applications'.
import time
import requests


class LivyBatchRunner:
    # Terminal states for a Livy batch; "unknown" is used when the state cannot be read.
    VALID_STATES = ["success", "dead", "unknown"]
    SERVICE_NOT_AVAILABLE = "service is not available"

    def __init__(self, serverBaseURL):
        self.serverBaseURL = serverBaseURL
        self.runID = None

    def submitBatch(self, params):
        # Submit a batch via POST /batches; Livy responds with 201 and the batch id/state.
        runState = None
        try:
            if self.serverBaseURL is not None and self.serverBaseURL != "":
                url = f"{self.serverBaseURL}/batches"
                retVal = requests.post(url, json=params)
                if retVal.status_code == 201:
                    responseJSON = retVal.json()
                    self.runID = responseJSON.get("id", None)
                    runState = responseJSON.get("state", None)
                else:
                    self.runID = -1
                    runState = LivyBatchRunner.SERVICE_NOT_AVAILABLE
            else:
                self.runID = -1
                runState = LivyBatchRunner.SERVICE_NOT_AVAILABLE
        except Exception:
            self.runID = -1
            runState = LivyBatchRunner.SERVICE_NOT_AVAILABLE
        return self.runID, runState

    def getBatchState(self):
        # Query the current state of the submitted batch via GET /batches/{id}/state.
        url = f"{self.serverBaseURL}/batches/{self.runID}/state"
        retVal = requests.get(url)
        if retVal.status_code == 200:
            responseJSON = retVal.json()
            runState = responseJSON.get("state", None)
        else:
            runState = "unknown"
        return runState

    def waitForBatch(self, sleepTime, timeoutValue):
        # Poll the batch state every sleepTime seconds until it reaches a terminal
        # state or the accumulated wait time crosses timeoutValue seconds.
        runState = "running"
        runTime = 0
        while runState not in LivyBatchRunner.VALID_STATES and runTime < timeoutValue:
            time.sleep(sleepTime)
            runTime = runTime + sleepTime
            runState = self.getBatchState()
        return runTime, runState
Using the Helper Class
After defining the class, we can run Spark jobs as below:
from LivyBatchRunner import LivyBatchRunner

DEFAULT_TIME_OUT_VALUE = 600   # maximum time (in seconds) to wait for the batch
DEFAULT_PING_TIME = 20         # polling interval (in seconds)

params = {
    "name": <name of application>,
    "file": <path to application entry point>,
    "pyFiles": [],
    "args": [param_str]           # param_str holds the application's arguments
}

serverURL = <url>
livy = LivyBatchRunner(serverURL)
jobID, state = livy.submitBatch(params)
if state == LivyBatchRunner.SERVICE_NOT_AVAILABLE:
    print("Livy server is not available")
else:
    timeTaken, state = livy.waitForBatch(DEFAULT_PING_TIME, DEFAULT_TIME_OUT_VALUE)
    if state not in LivyBatchRunner.VALID_STATES:
        # the wait loop timed out before the batch reached a terminal state
        print("Batch is still running. Please check after some time")
    else:
        print(f"Batch execution is done. Status is {state}")
Conclusion
Apache Livy makes the task of invoking and monitoring remote Spark jobs quite easy.
In a follow-up article, I will share a similar helper for Databricks jobs.
References
- Apache Livy — https://livy.incubator.apache.org/
- Apache Livy Getting Started — https://livy.incubator.apache.org/get-started/
- Apache Livy Examples — https://livy.incubator.apache.org/examples/
- Apache Livy REST API — https://livy.incubator.apache.org/docs/latest/rest-api.html