Ensuring Data Quality With Great Expectations and Databricks
Ensure data quality in pipelines with Great Expectations. Learn to integrate with Databricks, validate data, and automate checks for reliable datasets.
Data quality checks are critical for any production pipeline. While there are many ways to implement them, the Great Expectations library is a popular one.
Great Expectations is a powerful tool for maintaining data quality by defining, managing, and validating expectations for your data. In this article, we will discuss how you can use it to ensure data quality in your data pipelines.
Integrating Great Expectations With Databricks
Great Expectations is a popular open-source data quality and testing framework that helps data teams define, document, and monitor data quality expectations for their datasets.
Integrating Great Expectations with Databricks allows you to automate data quality checks within your Databricks workflows, ensuring that your data is accurate, consistent, and reliable.
Great Expectations can be used with a wide variety of data platforms. It is designed to be flexible and can integrate with different data sources, including databases, data warehouses, data lakes, and file systems. Here are some common data platforms and how you can use Great Expectations with them:
Supported Data Platforms
Relational Databases
- PostgreSQL
- MySQL
- SQLite
- SQL Server
- Oracle
For example, you can connect to PostgreSQL through SQLAlchemy and describe the batch of data you want to validate:
from sqlalchemy import create_engine
import great_expectations as ge
# Connect to the Postgres database (the credentials shown are placeholders)
engine = create_engine("postgresql://devsandbox:xxxxxxxxx@host:5432/rayandb")
# Load the Great Expectations project context
context = ge.data_context.DataContext()
# Describe which configured datasource and data asset to validate
batch_request = {
    "datasource_name": "nycstocks",
    "data_connector_name": "default_inferred_data_connector_name",
    "data_asset_name": "Rayankula",
}
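The batch request above only describes which data to validate; to actually run expectations against it, you typically ask the context for a validator. A minimal sketch of that next step, assuming the nycstocks datasource is already configured in the project, and using an illustrative suite name and column:
from great_expectations.core.batch import BatchRequest
# Register (or overwrite) an expectation suite and get a validator for the batch
context.create_expectation_suite("stocks_suite", overwrite_existing=True)
validator = context.get_validator(
    batch_request=BatchRequest(**batch_request),
    expectation_suite_name="stocks_suite",
)
# Expectations are defined directly on the validator ("symbol" is a placeholder column)
validator.expect_column_values_to_not_be_null("symbol")
validator.save_expectation_suite(discard_failed_expectations=False)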
Data Warehouses
- Snowflake
- Amazon Redshift
- Google BigQuery
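The pattern for warehouses is the same as for relational databases: point a SQLAlchemy engine at the warehouse and wrap the query results. A rough sketch for Snowflake, assuming the snowflake-sqlalchemy connector is installed and using placeholder credentials and an illustrative orders table:
import pandas as pd
from sqlalchemy import create_engine
import great_expectations as ge
# Placeholder Snowflake connection string
engine = create_engine(
    "snowflake://<user>:<password>@<account>/<database>/<schema>?warehouse=<warehouse>"
)
# Pull a sample of the table and wrap it in a Great Expectations dataset
orders = pd.read_sql("SELECT * FROM orders LIMIT 1000", engine)
orders_ge = ge.from_pandas(orders)
orders_ge.expect_column_values_to_not_be_null("order_id")
print(orders_ge.validate())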
Data Lakes
- Amazon S3
- Azure Data Lake Storage
- Google Cloud Storage
File Systems
- Local File System
- HDFS
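For files on a local or mounted file system, the quickest route is the library's Pandas helpers. A small sketch, with a placeholder path and column name:
import great_expectations as ge
# Read a CSV straight into a Pandas-backed Great Expectations dataset
df_ge = ge.read_csv("/path/to/local/file.csv")
df_ge.expect_column_to_exist("id")
print(df_ge.validate())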
Big Data Platforms
- Apache Spark
- Databricks
A Step-by-Step Guide
Prerequisites
- Databricks workspace. An active Databricks workspace where you can create and run notebooks.
- Great Expectations installation. The Great Expectations library installed in your Databricks environment (Step 1 below covers this).
Step 1: Install Great Expectations
In your Databricks workspace, create a new notebook. Install the Great Expectations library using the following command:
%pip install great_expectations
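Depending on your Databricks Runtime version, you may also need to restart the Python process so the notebook picks up the newly installed package; in recent runtimes this can be done with:
# Restart the Python interpreter after %pip install (recent Databricks Runtimes)
dbutils.library.restartPython()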
Step 2: Initialize Great Expectations
Initialize a Great Expectations project in your Databricks notebook:
import great_expectations as ge
# Initialize a DataContext
context = ge.data_context.DataContext()
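Note that DataContext() expects an existing great_expectations/ project directory. In a Databricks notebook it is often more convenient to build the context in code and keep its artifacts on DBFS; one possible sketch, with the DBFS root path chosen purely as an example:
from great_expectations.data_context import BaseDataContext
from great_expectations.data_context.types.base import (
    DataContextConfig,
    FilesystemStoreBackendDefaults,
)
# Keep suites, validation results, and data docs under a DBFS-backed root directory
project_config = DataContextConfig(
    store_backend_defaults=FilesystemStoreBackendDefaults(
        root_directory="/dbfs/great_expectations/"
    )
)
context = BaseDataContext(project_config=project_config)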
Step 3: Create and Validate Expectations
Load your data into a Spark DataFrame:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("GreatExpectationsExample").getOrCreate()
# Load dataset
dataset_path = "/path/to/rayankula/dataset.csv"
df = spark.read.csv(dataset_path, header=True, inferSchema=True)
Create expectations for your data. For example, let’s check that a column exists, contains no nulls, takes values from an allowed set, and has a mean within an expected range:
# Convert Spark DataFrame to Great Expectations DataFrame
df_ge = ge.dataset.SparkDFDataset(df)
# Create expectations
df_ge.expect_column_to_exist("column1")
df_ge.expect_column_values_to_not_be_null("column1")
df_ge.expect_column_values_to_be_in_set("column2", ["value1", "value2", "value3"])
df_ge.expect_column_mean_to_be_between("column3", min_value=10, max_value=20)
Validate your data against the defined expectations:
# Validate the data
validation_result = df_ge.validate()
# Print the validation results
print(validation_result)
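In an automated pipeline you usually want the notebook or job to fail when validation fails rather than just printing the result. A simple guard, assuming the result object exposes a success flag as recent versions do:
# Fail the notebook/job if any expectation was not met
if not validation_result.success:
    raise ValueError("Data quality checks failed - inspect validation_result for details")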
Step 4: Save and Load Expectations
Save the expectations to a JSON file:
# Save expectations to a JSON file
expectations_path = "/path/to/save/expectations.json"
df_ge.save_expectation_suite(filepath=expectations_path)
Load the expectations from a JSON file:
# Load expectations from a JSON file
df_ge.load_expectation_suite(filepath=expectations_path)
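If you also want the suite tracked by the DataContext, so that it appears in data docs and can be reused elsewhere, you can store it in the context's expectation store. A sketch, reusing the context from Step 2 and an illustrative suite name:
# Register the suite with the DataContext's expectation store
suite = df_ge.get_expectation_suite(discard_failed_expectations=False)
context.save_expectation_suite(suite, expectation_suite_name="example_suite")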
Step 5: Generate Data Docs
Generate data documentation to visualize the validation results:
# Generate data docs
context.build_data_docs()
# Open data docs in a web browser
context.open_data_docs()
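open_data_docs() tries to launch a local browser, which is not available on a Databricks cluster. One workaround is to render the generated HTML inline with Databricks' displayHTML, assuming the default local_site layout and the DBFS-backed root shown earlier:
# Render the built data docs inside the notebook (path assumes the DBFS root above)
docs_index = "/dbfs/great_expectations/uncommitted/data_docs/local_site/index.html"
with open(docs_index) as f:
    displayHTML(f.read())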
Example Use Case: Validating a Sales Dataset
Step 1: Load Sales Data
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("GreatExpectationsSalesExample").getOrCreate()
# Load sales dataset
sales_data_path = "/path/to/sales_data.csv"
sales_df = spark.read.csv(sales_data_path, header=True, inferSchema=True)
Step 2: Create Expectations for Sales Data
import great_expectations as ge
# Convert Spark DataFrame to Great Expectations DataFrame
sales_df_ge = ge.dataset.SparkDFDataset(sales_df)
# Create expectations for sales data
sales_df_ge.expect_column_to_exist("order_id")
sales_df_ge.expect_column_values_to_not_be_null("order_id")
sales_df_ge.expect_column_values_to_be_unique("order_id")
sales_df_ge.expect_column_values_to_not_be_null("order_date")
sales_df_ge.expect_column_values_to_match_regex("order_date", r"\d{4}-\d{2}-\d{2}")
sales_df_ge.expect_column_values_to_be_in_set("order_status", ["completed", "pending", "canceled"])
sales_df_ge.expect_column_values_to_be_between("order_amount", min_value=0, max_value=10000)
Step 3: Validate Sales Data
# Validate the sales data
sales_validation_result = sales_df_ge.validate()
# Print the validation results
print(sales_validation_result)
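Beyond the overall success flag, the result lists each expectation individually, which helps pinpoint exactly which sales columns broke. A small sketch, assuming the attribute layout of recent releases:
# Print every failed expectation together with the column it targeted
for result in sales_validation_result.results:
    if not result.success:
        config = result.expectation_config
        print(f"FAILED: {config.expectation_type} on column {config.kwargs.get('column')}")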
Step 4: Save and Load Expectations for Sales Data
# Save expectations to a JSON file
sales_expectations_path = "/path/to/save/sales_expectations.json"
sales_df_ge.save_expectation_suite(filepath=sales_expectations_path)
# Load expectations from a JSON file
sales_df_ge.load_expectation_suite(filepath=sales_expectations_path)
Step 5: Generate Data Docs for Sales Data
# Initialize a DataContext
context = ge.data_context.DataContext()
# Generate data docs
context.build_data_docs()
# Open data docs in a web browser
context.open_data_docs()
Conclusion
By following the steps outlined above, you can create, validate, save, and load expectations for your data and generate data documentation to visualize the validation results. This integration provides a powerful platform for ensuring data quality and reliability in your data pipelines.