
Validating Data in the Data Lake

How do you design your data lake architecture and what functionality do you need to validate the data in your data lake? Learn about it right now!

By Tony Fisher · Updated Jan. 09, 2017 · Opinion


Can you trust the data in your data lake? Many companies are guilty of dumping data into the data lake without a strategy for keeping track of what’s being ingested. This leads to a murky, swampy repository. If you don’t have transparency into your lake so that you can feel confident using the data, what’s the point of deploying a data lake in the first place?

You know Hadoop is a different animal from the data warehouse, requiring distinct technologies and skill sets. Unlike relational databases, Hadoop offers little help when it comes to quality control. Without incorporating additional tools into your data lake architecture, you have no way to apply metadata to your data as it is ingested, no way to automate metadata management so that you can scale to the volume and velocity of big data, and no way to customize rules for different data types from different sources.

How do you design your data lake architecture and what functionality do you need to validate the data in your data lake? From our years of experience deploying data lakes for leading companies across heavily regulated industries like financial services and healthcare, we’ve developed some best practices to help companies clean up and derive more value from their data lakes.

Create Data Zones

Managing data ingestion requires thinking about where data should land in your lake and where it goes after it's ingested, in line with your data lifecycle management strategy. We recommend creating zones in the file system of your data lake, each dedicated to a specific use: "transient," "raw," "trusted," and "refined" zones. By building a rule-based architecture tied to the metadata applied upon ingestion, you can automate validation as you move data from zone to zone (see the sketch after the list below). You may also want to incorporate a discovery sandbox "zone," moving trusted data there for wrangling, discovery, and exploratory analysis.

  • Transient zone: This is the loading zone, where you can perform basic quality checks using MapReduce or Spark.
  • Raw zone: Next, data is loaded into the raw zone, where it can be masked or tokenized to protect sensitive data such as personally identifiable information (PII), protected health information (PHI), or payment card industry (PCI) data. This is where raw datasets live for business analysts and data scientists to access.
  • Trusted zone: This zone serves as the single source of truth for data after it has been cleaned and validated. It can contain both master data and reference data. We define master data as core datasets, such as basic customer information (e.g., names and addresses), which need to be kept up to date using change data capture (CDC) mechanisms. Reference data consists of more complex datasets, such as those built by merging information from multiple sources (for example, more detailed customer profiles).
  • Refined zone: In the refined zone, data is enriched, prepared, and curated for analysis. This is where data can be consumed by applications, business analysts, or business intelligence tools.
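Here is a minimal sketch, in PySpark, of how zone-to-zone promotion with validation checks might look. The lake root path, dataset name, and the non-empty check are illustrative assumptions, not part of any particular product.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("zone-promotion").getOrCreate()

LAKE_ROOT = "s3a://example-lake"   # hypothetical bucket; could equally be an HDFS path

def zone_path(zone: str, dataset: str) -> str:
    # Conventional location of a dataset inside a zone.
    return f"{LAKE_ROOT}/{zone}/{dataset}"

def promote(dataset: str, src_zone: str, dst_zone: str, checks):
    # Move a dataset to the next zone only if every check passes.
    df = spark.read.parquet(zone_path(src_zone, dataset))
    for check in checks:
        ok, message = check(df)
        if not ok:
            raise ValueError(f"{dataset}: promotion {src_zone} -> {dst_zone} blocked: {message}")
    df.write.mode("overwrite").parquet(zone_path(dst_zone, dataset))

def non_empty(df):
    # Basic quality check suitable for the transient (loading) zone.
    return df.count() > 0, "dataset is empty"

promote("customers", "transient", "raw", checks=[non_empty])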


Give Business Users a Say

We’ve found it’s a good idea to employ a rule-based data policy engine that lets business users define the rules used for data quality validation, since they are typically the people most familiar with the data. Allowing business users to associate these rules with the metadata of ingested datasets also makes the rules part of the orchestration workflow: as data leaves the raw zone, the rules are executed before the data is made available in the trusted zone.
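As a sketch of what that might look like in practice, the snippet below stores business-defined rules as Spark SQL expressions alongside dataset metadata and evaluates them before promoting data from the raw zone to the trusted zone. The rule definitions, field names, and paths are assumptions for illustration, not the syntax of any specific policy engine.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("rule-engine").getOrCreate()

# Rules a business user might register against the "customers" dataset:
# each rule is a SQL expression that must hold for a record to be valid.
dataset_metadata = {
    "customers": {
        "rules": [
            "state IS NOT NULL",
            "length(zip) = 5",
        ],
    },
}

def apply_rules(dataset: str, df):
    # Flag each record as valid only if every registered rule passes.
    predicate = " AND ".join(dataset_metadata[dataset]["rules"])
    return df.withColumn("is_valid", F.expr(predicate))

raw = spark.read.parquet("s3a://example-lake/raw/customers")      # hypothetical path
validated = apply_rules("customers", raw)
validated.filter("is_valid").drop("is_valid") \
    .write.mode("overwrite").parquet("s3a://example-lake/trusted/customers")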

Rate Your Data

The beauty of Hadoop is that it captures all raw data, unlike relational databases, which only accept data that meets your schema criteria. Although this presents challenges for data quality, it also lets you decide what level of quality is acceptable, potentially extracting more value from your data. In other words, Hadoop can store “incomplete” datasets, which may still be useful for users who don’t need the missing fields. For example, some customer records may lack gender data but contain what the user needs, such as state of residence. To provide a clear picture of which data is incomplete or of lower quality, it’s important to separate the “good” records from the “bad” and to set thresholds that specify how many bad records are tolerable before a dataset is rejected.
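A minimal sketch of this good/bad split with a rejection threshold, again in PySpark; the completeness criterion (non-null state) and the 5% tolerance are illustrative assumptions you would tune per dataset.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("quality-threshold").getOrCreate()

df = spark.read.parquet("s3a://example-lake/raw/customers")   # hypothetical path

# A record missing gender may still be usable; here a record missing
# state (needed downstream) is what counts as "bad."
good = df.filter(F.col("state").isNotNull())
bad = df.filter(F.col("state").isNull())

total, bad_count = df.count(), bad.count()
bad_ratio = bad_count / total if total else 0.0

MAX_BAD_RATIO = 0.05   # reject the load if more than 5% of records are bad
if bad_ratio > MAX_BAD_RATIO:
    raise ValueError(f"Rejected load: {bad_ratio:.1%} bad records exceeds threshold")

good.write.mode("overwrite").parquet("s3a://example-lake/trusted/customers")
bad.write.mode("overwrite").parquet("s3a://example-lake/quarantine/customers")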

Go Back to the Source

Automating data quality checks as data moves from one zone to another is key. Another area to automate is reporting back to data producers. If records that violate the criteria you’ve established are ingested, you can automatically report them back to the data producer so they can be rectified, and the corrected records can then be re-ingested into the data lake.
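One way this feedback loop could be automated, continuing the sketch above: quarantined records and a small summary are written to a producer-facing location after each load. The report fields and paths are assumptions for illustration.

import json
from datetime import datetime, timezone
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("producer-feedback").getOrCreate()

# "Bad" records quarantined during the threshold check in the previous step.
bad = spark.read.parquet("s3a://example-lake/quarantine/customers")

report = {
    "dataset": "customers",
    "run_at": datetime.now(timezone.utc).isoformat(),
    "bad_record_count": bad.count(),
}

# Hand the records and summary back to the producer; once corrected, the
# files can be dropped into the transient zone and re-ingested.
bad.write.mode("overwrite").json("s3a://example-producer/feedback/customers")
print(json.dumps(report, indent=2))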

What is the backbone of successful data validation in the data lake? Operationalizing it ultimately requires an underlying data management platform that manages, tracks, and governs metadata, data quality, and security. When we work with our clients, we build that architecture on Zaloni Bedrock, which gives businesses the control and governance they need to feel confident in the quality of their data.

Data science · Data lake · Database

Published at DZone with permission of Tony Fisher, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
