Execute Spark Applications on Databricks Using the REST API
Let's get your Spark apps up and running on Databricks.
While many of us are habituated to executing Spark applications using the 'spark-submit' command, with the popularity of Databricks, this seemingly easy activity is getting relegated to the background. Databricks has made it very easy to provision Spark-enabled VMs on the two most popular cloud platforms, namely AWS and Azure. A couple of weeks ago, Databricks announced their availability on GCP as well. The beauty of the Databricks platform is that they have made it very easy to become a part of their platform. While Spark application development will continue to have its challenges - depending on the problem being addressed - the Databricks platform has taken out the pain of having to establish and manage your own Spark cluster.
Once registered on the platform, Databricks allows us to define a cluster of one or more VMs, with configurable RAM and executor specifications. We can also define a cluster that launches a minimum number of VMs at startup and then scales to a maximum number of VMs as required. After defining the cluster, we have to define jobs and notebooks. Notebooks contain the actual code executed on the cluster. We need to assign notebooks to jobs, as the Databricks cluster executes jobs (and not notebooks). Databricks also allows us to set up the cluster such that it downloads additional JARs and/or Python packages during cluster startup. We can also upload and install our own packages (I used a Python wheel).
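As a sketch, a job definition sent to the Jobs REST API can bundle the cluster sizing and the libraries to install at startup in one payload. The job name, Spark version, node type, notebook path, and wheel path below are placeholders, not values from the original setup:

```json
{
  "name": "data-reconciliation",
  "new_cluster": {
    "spark_version": "7.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "autoscale": { "min_workers": 1, "max_workers": 4 }
  },
  "libraries": [
    { "whl": "dbfs:/FileStore/wheels/recon_utils-0.1.0-py3-none-any.whl" },
    { "pypi": { "package": "pandas" } }
  ],
  "notebook_task": { "notebook_path": "/Shared/reconciliation" }
}
```

The `autoscale` block corresponds to the minimum/maximum VM counts described above, and the `libraries` list is how extra wheels and PyPI packages get installed during cluster startup.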
Recently, I developed some functionality - data reconciliation, data validation, and data profiling - using Spark. Initially, we developed the functionality against a local Spark installation and things were fine. While I knew that the design would need to be reworked, we went ahead with the local implementation. Why the design change? Each piece of functionality was fronted by its own microservice. As the microservices were going to be deployed using Docker and Kubernetes, we needed a design change for the simple reason that we could not deploy the Spark application on the Docker and Kubernetes setup. We needed the Spark application running on a dedicated Spark instance.
To make this happen, we had two options - Apache Livy and Databricks. For implementation flexibility, and also to cater to customer infrastructure, we decided to implement both options. In an earlier article (Execute Spark Applications With Apache Livy), I covered how we can execute Spark applications using Apache Livy's REST interface.
Using Databricks Remotely
Similar to Apache Livy, Databricks provides a REST API. As our implementation was in Python, we used the databricks_api package. While the REST API makes it simple to invoke a Spark application available on a Databricks cluster, I realized that all three services ended up with the same code - the mechanism for setting up and invoking the Databricks API was identical; only the names of the jobs and the parameters passed during invocation differed. Hence, I wrapped the common functionality into a helper class.
Here is the helper class to interact with Spark applications hosted on Databricks.
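The original listing is not available, so the following is a minimal sketch of such a helper, assuming the databricks_api package (`pip install databricks-api`) and the Jobs API's run-now/get-run calls. The class name, method names, and the injectable `client` parameter (handy for testing without a live workspace) are my assumptions, not the article's original code:

```python
import time


class DatabricksJobRunner:
    """Wraps the job-invocation logic shared by the three services.

    A sketch, not the article's original class: a ready-made client can be
    injected for testing; otherwise one is built from databricks_api.
    """

    def __init__(self, host=None, token=None, client=None):
        if client is None:
            # Imported lazily so the class is testable without the package.
            from databricks_api import DatabricksAPI
            client = DatabricksAPI(host=host, token=token)
        self.client = client

    def run_job(self, job_id, notebook_params=None):
        """Trigger a job run (Jobs API run-now) and return its run_id."""
        response = self.client.jobs.run_now(
            job_id=job_id, notebook_params=notebook_params or {}
        )
        return response["run_id"]

    def run_state(self, run_id):
        """Return the run's life-cycle state, e.g. PENDING, RUNNING, TERMINATED."""
        return self.client.jobs.get_run(run_id=run_id)["state"]["life_cycle_state"]

    def wait_for_completion(self, run_id, poll_seconds=10, timeout_seconds=1800):
        """Poll the run until it leaves PENDING/RUNNING; return its result state."""
        waited = 0
        while waited <= timeout_seconds:
            state = self.client.jobs.get_run(run_id=run_id)["state"]
            if state["life_cycle_state"] not in ("PENDING", "RUNNING"):
                return state.get("result_state", state["life_cycle_state"])
            time.sleep(poll_seconds)
            waited += poll_seconds
        raise TimeoutError(f"Run {run_id} still running after {timeout_seconds}s")
```

Each service can then hold one instance of this class and differ only in the job ID and parameters it passes, which is exactly the duplication the wrapper removes.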
Using the Helper Class
After defining the class, we can run Spark jobs as shown below:
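The article's original snippet is not available; the sketch below assumes a helper object exposing `run_job(job_id, notebook_params)` and `wait_for_completion(run_id)` methods. The job IDs and the parameter are placeholders, and mapping one job to each of the reconciliation, validation, and profiling services mirrors the setup described earlier:

```python
def run_spark_jobs(runner):
    """Kick off the three hypothetical Databricks jobs and collect results.

    `runner` is any helper exposing run_job()/wait_for_completion();
    the job IDs below are placeholders for the IDs shown in the
    Databricks jobs UI.
    """
    jobs = {"reconciliation": 101, "validation": 102, "profiling": 103}
    results = {}
    for name, job_id in jobs.items():
        # notebook_params is an illustrative payload, not a real contract.
        run_id = runner.run_job(job_id, notebook_params={"source": "orders"})
        results[name] = runner.wait_for_completion(run_id)
    return results
```

Because the runner is duck-typed, the same driver code works whether the helper talks to a real workspace or a stub in tests.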
The Databricks API makes it easy to interact with Databricks jobs remotely. Not only can we run jobs on the Databricks cluster, but we can also monitor their execution state.