
Ensuring Data Quality With Great Expectations and Databricks

Ensure data quality in pipelines with Great Expectations. Learn to integrate with Databricks, validate data, and automate checks for reliable datasets.

By Sairamakrishna BuchiReddy Karri and Srinivasarao Rayankula · Mar. 26, 25 · Tutorial

Data quality checks are critical for any production pipeline. While there are many ways to implement them, the Great Expectations library is a popular one. 

Great Expectations is a powerful tool for maintaining data quality by defining, managing, and validating expectations for your data. In this article, we will discuss how you can use it to ensure data quality in your data pipelines.

Integrating Great Expectations With Databricks

Great Expectations is a popular open-source data quality and testing framework that helps data teams define, document, and monitor data quality expectations for their datasets.

Integrating Great Expectations with Databricks allows you to automate data quality checks within your Databricks workflows, ensuring that your data is accurate, consistent, and reliable.

Databricks + Great Expectations

Great Expectations can be used with a wide variety of data platforms. It is designed to be flexible and can integrate with different data sources, including databases, data warehouses, data lakes, and file systems. Here are some common data platforms and how you can use Great Expectations with them:

Supported Data Platforms

Relational Databases

  • PostgreSQL
  • MySQL
  • SQLite
  • SQL Server
  • Oracle
Python
 
from sqlalchemy import create_engine
import great_expectations as ge

# Direct SQLAlchemy connection to the database (password elided)
engine = create_engine("postgresql://devsandbox:xxxxxxxxx@host:5432/rayandb")

# A DataContext plus a batch request pointing at a datasource that is
# already configured in the project's great_expectations.yml
context = ge.data_context.DataContext()
batch_request = {
    "datasource_name": "nycstocks",
    "data_connector_name": "default_inferred_data_connector_name",
    "data_asset_name": "Rayankula",
}


Data Warehouses

  • Snowflake
  • Amazon Redshift
  • Google BigQuery
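
For a warehouse, a common pattern with the legacy API is to point a SQLAlchemy-backed dataset at a table so the expectations are evaluated in the warehouse itself. The sketch below is illustrative: the connection string, table, and column names are placeholders, and it assumes the legacy ge.dataset.SqlAlchemyDataset interface (the appropriate SQLAlchemy dialect for your warehouse must be installed).

Python
 
from sqlalchemy import create_engine
import great_expectations as ge

# Placeholder connection string; Snowflake, Redshift, and BigQuery all
# work the same way through their respective SQLAlchemy dialects.
engine = create_engine("snowflake://user:password@account/database/schema")

# Wrap a warehouse table so expectations run as SQL against it
orders = ge.dataset.SqlAlchemyDataset(table_name="orders", engine=engine)
orders.expect_column_values_to_not_be_null("order_id")
print(orders.validate())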

Data Lakes

  • Amazon S3
  • Azure Data Lake Storage
  • Google Cloud Storage
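
On a data lake, the typical pattern (and the one used throughout the Databricks walkthrough below) is to read the files into a Spark DataFrame and wrap it. A short sketch with a hypothetical S3 path:

Python
 
from pyspark.sql import SparkSession
import great_expectations as ge

spark = SparkSession.builder.getOrCreate()

# Read Parquet files from object storage (hypothetical bucket and prefix)
lake_df = spark.read.parquet("s3a://my-bucket/landing/orders/")

# Wrap the DataFrame so expectations can be applied to it
lake_ge = ge.dataset.SparkDFDataset(lake_df)
lake_ge.expect_table_row_count_to_be_between(min_value=1)
print(lake_ge.validate())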

File Systems

  • Local File System
  • HDFS
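
For plain files, the legacy API can read a CSV straight into a Pandas-backed dataset. A minimal sketch with a hypothetical path (ge.read_csv returns a PandasDataset in 0.x releases):

Python
 
import great_expectations as ge

# Read a local CSV (hypothetical path) into a Great Expectations dataset
local_ge = ge.read_csv("/dbfs/tmp/example.csv")
local_ge.expect_column_to_exist("id")
print(local_ge.validate())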

Big Data Platforms

  • Apache Spark
  • Databricks

A Step-by-Step Guide

Prerequisites

  1. Databricks workspace. Ensure you have an active Databricks workspace.
  2. Great Expectations installation. Install Great Expectations in your Databricks environment.

Step 1: Install Great Expectations

In your Databricks workspace, create a new notebook. Install the Great Expectations library using the following command:

Plain Text
 
%pip install great_expectations


Step 2: Initialize Great Expectations

Initialize a Great Expectations project in your Databricks notebook:

Python
 
import great_expectations as ge

# Initialize a DataContext
context = ge.data_context.DataContext()


Step 3: Create and Validate Expectations

Load your data into a Spark DataFrame:

Python
 
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("GreatExpectationsExample").getOrCreate()

# Load dataset
dataset_path = "/path/to/rayankula/dataset.csv"
df = spark.read.csv(dataset_path, header=True, inferSchema=True)


Create expectations for your data. For example, let’s create expectations for column existence, null values, allowed values, and value ranges:

Python
 
# Convert Spark DataFrame to Great Expectations DataFrame
df_ge = ge.dataset.SparkDFDataset(df)

# Create expectations
df_ge.expect_column_to_exist("column1")
df_ge.expect_column_values_to_not_be_null("column1")
df_ge.expect_column_values_to_be_in_set("column2", ["value1", "value2", "value3"])
df_ge.expect_column_mean_to_be_between("column3", min_value=10, max_value=20)


Validate your data against the defined expectations:

Python
 
# Validate the data
validation_result = df_ge.validate()

# Print the validation results
print(validation_result)
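
In an automated pipeline, you will usually want to act on the outcome rather than just print it. The sketch below assumes the legacy validation result layout, where a top-level success flag and a statistics summary are returned; field names may differ slightly between versions.

Python
 
# Fail fast so a Databricks job surfaces the data quality problem
# instead of silently continuing with bad data.
if not validation_result["success"]:
    stats = validation_result["statistics"]
    raise ValueError(
        f"Data quality check failed: {stats['unsuccessful_expectations']} "
        f"of {stats['evaluated_expectations']} expectations were not met."
    )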


Step 4: Save and Load Expectations

Save the expectations to a JSON file:

Python
 
# Save expectations to a JSON file
expectations_path = "/path/to/save/expectations.json"
df_ge.save_expectation_suite(expectation_suite_name="example_suite", filepath=expectations_path)


Load the expectations from a JSON file:

Python
 
# Load expectations from a JSON file
df_ge.load_expectation_suite(filepath=expectations_path)


Step 5: Generate Data Docs

Generate data documentation to visualize the validation results:

Python
 
# Generate data docs
context.build_data_docs()

# Open data docs in a web browser
context.open_data_docs()


Example Use Case: Validating a Sales Dataset

Step 1: Load Sales Data

Python
 
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("GreatExpectationsSalesExample").getOrCreate()

# Load sales dataset
sales_data_path = "/path/to/sales_data.csv"
sales_df = spark.read.csv(sales_data_path, header=True, inferSchema=True)


Step 2: Create Expectations for Sales Data

Python
 
import great_expectations as ge

# Convert Spark DataFrame to Great Expectations DataFrame
sales_df_ge = ge.dataset.SparkDFDataset(sales_df)

# Create expectations for sales data
sales_df_ge.expect_column_to_exist("order_id")
sales_df_ge.expect_column_values_to_not_be_null("order_id")
sales_df_ge.expect_column_values_to_be_unique("order_id")
sales_df_ge.expect_column_values_to_not_be_null("order_date")
sales_df_ge.expect_column_values_to_match_regex("order_date", r"\d{4}-\d{2}-\d{2}")
sales_df_ge.expect_column_values_to_be_in_set("order_status", ["completed", "pending", "canceled"])
sales_df_ge.expect_column_values_to_be_between("order_amount", min_value=0, max_value=10000)


Step 3: Validate Sales Data

Python
 
# Validate the sales data
sales_validation_result = sales_df_ge.validate()

# Print the validation results
print(sales_validation_result)
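
To see which individual checks failed, you can walk the per-expectation results in the validation output. This is a sketch based on the legacy validation result format; field names may vary slightly between versions.

Python
 
# Print each failed expectation along with the arguments it was given
for result in sales_validation_result["results"]:
    if not result["success"]:
        config = result["expectation_config"]
        print(f"FAILED: {config['expectation_type']} with kwargs {config['kwargs']}")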


Step 4: Save and Load Expectations for Sales Data

Python
 
# Save expectations to a JSON file
sales_expectations_path = "/path/to/save/sales_expectations.json"
sales_df_ge.save_expectation_suite(expectation_suite_name="sales_suite", filepath=sales_expectations_path)

# Load expectations from a JSON file
sales_df_ge.load_expectation_suite(filepath=sales_expectations_path)


Step 5: Generate Data Docs for Sales Data

Python
 
# Initialize a DataContext
context = ge.data_context.DataContext()

# Generate data docs
context.build_data_docs()

# Open data docs in a web browser
context.open_data_docs()


Conclusion

By following the steps outlined above, you can create, validate, save, and load expectations for your data and generate data documentation to visualize the validation results. This integration provides a powerful platform for ensuring data quality and reliability in your data pipelines.
