Collecting Logs in Azure Databricks
This article demonstrates how to create and collect logs from an Azure Databricks Spark cluster, building the monitoring library with Docker along the way.
Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform. In this blog, we are going to see how to collect logs from Azure Databricks and send them to Azure Log Analytics (ALA). Before going further, we need to look at how to set up a Spark cluster in Azure.
Create a Spark Cluster in Databricks
- In the Azure portal, go to the Databricks workspace that you created, and then click Launch Workspace.
- You are redirected to the Azure Databricks portal. From the portal, click New Cluster.
- Under “Advanced Options,” click the “Init Scripts” tab. Go to the last line under the “Init Scripts” section. In the “Destination” drop-down, select “DBFS,” enter “dbfs:/databricks/spark-monitoring/spark-monitoring.sh” in the text box, and click the “Add” button.
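For readers who prefer the Clusters API over the UI, the same init-script attachment can be sketched as a fragment of the cluster's JSON spec (the cluster name here is a made-up placeholder; field names follow the Databricks Clusters API):

```json
{
  "cluster_name": "log-demo-cluster",
  "init_scripts": [
    {
      "dbfs": {
        "destination": "dbfs:/databricks/spark-monitoring/spark-monitoring.sh"
      }
    }
  ]
}
```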
Run a Spark SQL job
- In the left pane, select Azure Databricks. From the Common Tasks, select New Notebook.
- In the Create Notebook dialog box, enter a name, select language, and select the Spark cluster that you created earlier.
Create a Notebook
- Click the Workspace button
- In the Create Notebook dialog, enter a name and select the notebook’s default language
- If there are running clusters, the Cluster drop-down displays them. Select the cluster you created earlier.
Adding a Logger to the Databricks Notebook
Now that you are all set up with the notebook, let’s configure the cluster to send logs to an Azure Log Analytics workspace. For that, we will create a Log Analytics workspace in Azure.
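Once logs flow to Log Analytics, anything the notebook writes through a logger is collected along with the cluster's driver logs. As a minimal sketch (using Python's standard logging module; on a real cluster you would often grab the JVM Log4j logger via the Spark context instead), a notebook cell might look like:

```python
import logging

# Minimal sketch of notebook logging. On a cluster with the
# spark-monitoring library attached, driver log output is forwarded
# to the Log Analytics workspace configured in spark-monitoring.sh.
logger = logging.getLogger("notebook-demo")
logger.setLevel(logging.INFO)

handler = logging.StreamHandler()  # stderr is captured in the driver logs
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)
logger.addHandler(handler)

logger.info("Spark SQL job started")
logger.error("example error event")  # findable later with a search on "error"
```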
- First, deploy the Spark monitoring library to the Azure Databricks cluster.
- Clone or download this GitHub repository.
- Install the Azure Databricks CLI.
- A personal access token is required to use the CLI. For instructions, see token management.
- You can also use the CLI from the Azure Cloud Shell.
Build the Azure Databricks Monitoring Library Using Docker
Configure the Azure Databricks Workspace
Copy the JAR files and init scripts to Databricks.
- Use the Azure Databricks CLI to create a directory named dbfs:/databricks/spark-monitoring:
  `dbfs mkdirs dbfs:/databricks/spark-monitoring`
- Open the /src/spark-listeners/scripts/spark-monitoring.sh script file and add your Log Analytics workspace ID and key to the lines below:
  `export LOG_ANALYTICS_WORKSPACE_ID=`
  `export LOG_ANALYTICS_WORKSPACE_KEY=`
- Use the Azure Databricks CLI to copy /src/spark-listeners/scripts/spark-monitoring.sh to the directory you created earlier:
  `dbfs cp <local path to spark-monitoring.sh> dbfs:/databricks/spark-monitoring/spark-monitoring.sh`
- Use the Azure Databricks CLI to copy all of the JAR files from the spark-monitoring/src/target folder to the same directory:
  `dbfs cp --overwrite --recursive <local path to target folder> dbfs:/databricks/spark-monitoring/`
Everything is now in place to query the Log Analytics workspace for logs.
`Event | search "error"`
This query returns all error-level logs from the generated events. Similarly, we can retrieve logs for different classes.
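Beyond a bare search, the monitoring library writes to custom log tables; table and field names such as SparkLoggingEvent_CL and Level_s below are assumptions that depend on the library version, so check your workspace for the exact schema. A sketch of a more targeted query:

```
SparkLoggingEvent_CL
| where Level_s == "ERROR"
| order by TimeGenerated desc
| take 20
```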
Published at DZone with permission of Shubham Dangare. See the original article here.