Run and Scale an Apache Spark Application on Kubernetes
Learn how to set up Apache Spark on IBM Cloud Kubernetes Service by pushing the Spark container images to IBM Cloud Container Registry.
Let's begin by looking at the technologies involved.
What is Apache Spark?
Apache Spark (Spark) is an open source data-processing engine for large data sets. It is designed to deliver the computational speed, scalability and programmability required for Big Data - specifically for streaming data, graph data, machine learning and artificial intelligence (AI) applications.
Spark's analytics engine processes data 10 to 100 times faster than alternatives. It scales by distributing processing work across large clusters of computers, with built-in parallelism and fault tolerance. It even includes APIs for programming languages that are popular among data analysts and data scientists, including Scala, Java, Python and R.
Quick intro to Kubernetes and the IBM Cloud Kubernetes Service
Kubernetes is an open source platform for managing containerized workloads and services across multiple hosts. It offers management tools for deploying, automating, monitoring and scaling containerized apps with minimal-to-no manual intervention.
IBM Cloud Kubernetes Service is a managed offering to create your own Kubernetes cluster of compute hosts to deploy and manage containerized apps on IBM Cloud. As a certified Kubernetes provider, IBM Cloud Kubernetes Service provides intelligent scheduling, self-healing, horizontal scaling, service discovery and load balancing, automated rollouts and rollbacks, and secret and configuration management for your apps.
How Apache Spark works on Kubernetes
To understand how Spark works on Kubernetes, refer to the Spark documentation. The following occurs when you run your Python application on Spark:
- Apache Spark creates a driver pod with the requested CPU and memory.
- The driver then creates executor pods that connect to the driver and execute application code.
- While the application is running, executor pods are terminated and new ones are created based on the load. Once the application completes, all executor pods are terminated, and the logs are persisted in the driver pod, which remains in the Completed state.
Prerequisites
- A runnable distribution of Spark 2.3 or above. Recommended: Spark 3.1.1 (Download)
- A running Kubernetes cluster with access configured to it using kubectl. Check the IBM Cloud Kubernetes Service documentation to create a cluster. For autoscaling, set the Worker nodes per zone to one.
- Install and set up the IBM Cloud Container Registry CLI and a namespace
- IBM Cloud Kubernetes CLI
- Docker engine
In short, you need three things to complete this journey:
- A standard IBM Cloud Kubernetes Service cluster
- An unzipped Spark distribution
- An IBM Cloud Container Registry with a namespace set up
Configure the IBM Cloud Kubernetes Service cluster
In this section, you will access the IBM Cloud Kubernetes Service cluster and create a custom service account and a ClusterRoleBinding.
- To access your standard IBM Cloud Kubernetes Service cluster, refer to the Access section of your cluster. Following the steps, you should be able to download the kubeconfig configuration file for your cluster and add it to your existing ~/.kube/config or the last file in the KUBECONFIG environment variable.
- Run the below command to create the service account. To understand why RBAC is required, refer to the RBAC section of the Spark documentation:
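A minimal sketch of the command, following the example in the Spark documentation (assumes the default namespace and the service account name spark used in the rest of this article):

```shell
# Create a service account named "spark" in the default namespace
kubectl create serviceaccount spark
```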
- To grant a service account a ClusterRole, you need a ClusterRoleBinding. To create a ClusterRoleBinding, you can use the kubectl create rolebinding (or clusterrolebinding) command. For example, the following command grants the edit ClusterRole in the default namespace to the spark service account created above:
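This is the command shape shown in the Spark documentation for Kubernetes; the binding name spark-role is illustrative:

```shell
# Grant the "edit" ClusterRole to the spark service account
# in the default namespace
kubectl create clusterrolebinding spark-role \
  --clusterrole=edit \
  --serviceaccount=default:spark \
  --namespace=default
```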
Patch the spark service account to use the default all-icr-io secret to pull images from IBM Cloud Container Registry:
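A sketch of the patch, assuming the all-icr-io secret already exists in the default namespace (IBM Cloud Kubernetes Service creates it there by default):

```shell
# Attach the all-icr-io image pull secret to the spark service account
kubectl patch serviceaccount spark \
  -p '{"imagePullSecrets": [{"name": "all-icr-io"}]}'
```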
Push the Spark container images to a private container registry
Let's start by pushing our Spark container images to our private registry on IBM Cloud.
- In the terminal on your machine, move to the unzipped Spark folder.
- Run the ibmcloud cr login command to log your local Docker daemon into IBM Cloud Container Registry:
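The command takes no arguments (it assumes you are already logged in with ibmcloud login):

```shell
# Log the local Docker daemon into IBM Cloud Container Registry
ibmcloud cr login
```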
- Set an environment variable to store your container registry namespace:
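For example (the placeholder is illustrative; use the namespace you created earlier):

```shell
# Replace <my_namespace> with your Container Registry namespace
export CONTAINER_NAMESPACE=<my_namespace>
```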
- To get the container registry endpoint for the region you logged in to, run the below command and export it as an environment variable:
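A sketch, assuming the us-south region; ibmcloud cr info prints the registry endpoint for your region:

```shell
# Example for us-south; check `ibmcloud cr info` for your region's endpoint
export CONTAINER_REGISTRY=us.icr.io
```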
- Build the container image:
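A sketch using the docker-image-tool.sh script shipped with the Spark distribution; the -p flag builds the PySpark image (tag latest is an assumption):

```shell
# Build the Spark container image with Python bindings
./bin/docker-image-tool.sh \
  -r $CONTAINER_REGISTRY/$CONTAINER_NAMESPACE \
  -t latest \
  -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile \
  build
```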
- Push the container image:
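Then push, assuming the same registry prefix and tag used for the build:

```shell
# Push the built image(s) to IBM Cloud Container Registry
./bin/docker-image-tool.sh \
  -r $CONTAINER_REGISTRY/$CONTAINER_NAMESPACE \
  -t latest \
  push
```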
Run the Spark application
There are two ways to run a Spark application on IBM Cloud Kubernetes Service:
- Using spark-submit.
- Using spark-on-k8s operator.
The spark-submit way
- Before running spark-submit, export the K8S_MASTER_URL:
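One way to read the master URL out of your current kubeconfig context (assumes kubectl is already configured for the cluster):

```shell
# Extract the API server URL of the current cluster
export K8S_MASTER_URL=$(kubectl config view --minify \
  -o jsonpath='{.clusters[0].cluster.server}')
```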
- spark-submit can be used directly to submit a Spark application to a Kubernetes cluster. You can do that with the following command:
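A sketch that submits the bundled PySpark Pi example; the image name spark-py:latest matches what docker-image-tool.sh produces for the Python image, but adjust it to whatever you pushed:

```shell
./bin/spark-submit \
  --master k8s://$K8S_MASTER_URL \
  --deploy-mode cluster \
  --name spark-pi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=$CONTAINER_REGISTRY/$CONTAINER_NAMESPACE/spark-py:latest \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  local:///opt/spark/examples/src/main/python/pi.py
```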
- Get the driver pod name by running the below command:
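Spark labels the driver pod with spark-role=driver, so you can filter on that label:

```shell
# List only the driver pod(s)
kubectl get pods -l spark-role=driver
```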
- The UI associated with any application can be accessed locally using kubectl port-forward:
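For example (replace the placeholder with the driver pod name from the previous step; the Spark UI listens on port 4040):

```shell
# Forward the Spark UI to http://localhost:4040
kubectl port-forward <driver-pod-name> 4040:4040
```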
The spark-on-k8s operator way
Spark-on-k8s operator is a Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
- To install, you need Helm. Follow the instructions mentioned in the GitHub repo to install the operator on your IBM Cloud Kubernetes Service cluster.
- Create a YAML file - spark-deploy.yaml - with the content below. Replace the placeholders <CONTAINER_REGISTRY> and <CONTAINER_NAMESPACE> and save the file:
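A sketch of such a manifest, using the spark-on-k8s operator's ScheduledSparkApplication resource (v1beta2 API); the name pyspark-pi, schedule, and resource sizes are illustrative assumptions:

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: ScheduledSparkApplication
metadata:
  name: pyspark-pi
  namespace: default
spec:
  # Run the application every 10 minutes
  schedule: "@every 10m"
  concurrencyPolicy: Allow
  template:
    type: Python
    pythonVersion: "3"
    mode: cluster
    image: "<CONTAINER_REGISTRY>/<CONTAINER_NAMESPACE>/spark-py:latest"
    imagePullPolicy: Always
    mainApplicationFile: local:///opt/spark/examples/src/main/python/pi.py
    sparkVersion: "3.1.1"
    restartPolicy:
      type: Never
    driver:
      cores: 1
      memory: "512m"
      serviceAccount: spark
    executor:
      cores: 1
      instances: 2
      memory: "512m"
```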
- You'll see that the Spark application is scheduled to run every 10 minutes, calculating the value of Pi.
- Apply spark-deploy.yaml to schedule the Spark application:
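The standard kubectl command:

```shell
# Create the ScheduledSparkApplication resource on the cluster
kubectl apply -f spark-deploy.yaml
```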
- Run the kubectl get pods --watch command to check the number of executor pods running.
Note: To check Kubernetes resources, logs, etc., I would recommend IBM Kui, a hybrid command-line/UI development experience for cloud-native development.
Autoscaling
The autoscaling of the pods and the IBM Cloud Kubernetes Service cluster depends on the requests and limits you set on the Spark driver and executor pods. With the cluster-autoscaler add-on, you can scale the worker pools in your IBM Cloud Kubernetes Service classic or VPC cluster automatically to increase or decrease the number of worker nodes in the worker pool based on the sizing needs of your scheduled workloads. The cluster-autoscaler add-on is based on the Kubernetes Cluster Autoscaler project.
For scaling apps, check out the IBM Cloud documentation.
Install the cluster autoscaler add-on to your cluster from the console:
- From the IBM Cloud Kubernetes Service cluster dashboard, select the cluster where you want to enable autoscaling.
- On the Overview page, click Add-ons.
- On the Add-ons page, locate the Cluster Autoscaler add-on and click Install. You can also do the same using the CLI.
After enabling the add-on, you need to edit the ConfigMap. For step-by-step instructions, refer to the autoscaling cluster documentation.
Dynamic resource allocation
When a Spark application is submitted, resources are requested based on the requests you set on the driver and executor. With dynamic resource allocation, Spark allocates additional resources required to complete the tasks in a job. The resources are automatically released once the load comes down or the tasks are completed.
For dynamic allocation to work, your application must set the configuration settings spark.dynamicAllocation.enabled and spark.dynamicAllocation.shuffleTracking.enabled to true. To do this, follow these steps:
- Move to the unzipped Spark folder.
- Under the conf folder, rename spark-defaults.conf.template to spark-defaults.conf and add the below settings:

```
spark.dynamicAllocation.enabled true
spark.dynamicAllocation.shuffleTracking.enabled true
```
- Save the configuration file.
- You may need to build and push the container image again to reflect the config changes. As the imagePullPolicy is set to Always, the new container image will be pulled automatically. Remember to delete the existing driver and executor pods with the kubectl delete pods --all command.
You can add Apache Spark Streaming to PySpark applications like wordcount to read and write batches of data to cloud services like IBM Cloud Object Storage (COS). Also, check out Stocator for connecting COS to Apache Spark.
Published at DZone with permission of Vidyasagar Machupalli, DZone MVB. See the original article here.