What Is Data Profiling?

Data profiling is a process of examining data from an existing source and summarizing information about that data. Learn about its benefits!

By Garrett Alley · Jan. 21, 2019 · Analysis

Data profiling is the process of examining data from an existing source and summarizing information about that data. You profile data to determine the accuracy, completeness, and validity of your data. Data profiling can be done for many reasons, but it is most commonly part of determining data quality as a component of a larger project. Data profiling is often combined with an ETL (Extract, Transform, and Load) process to move data from one system to another. When done properly, ETL and data profiling can be combined to cleanse, enrich, and move quality data to a target location.

For example, you might want to perform data profiling when migrating from a legacy system to a new system. Data profiling can help identify data quality issues that need to be handled in the code when you move data into your new system. Or, you might want to perform data profiling as you move data to a data warehouse for business analytics. When data is moved to a data warehouse, ETL tools are often used to move it. Data profiling can help identify which data quality issues must be fixed in the source and which can be fixed during the ETL process.

Why Profile Data?

Data profiling allows you to answer the following questions about your data (a short code sketch after the list shows one way to compute these checks):

  • Is the data complete? Are there blank or null values?
  • Is the data unique? How many distinct values are there? Is the data duplicated?
  • Are there anomalous patterns in your data? What is the distribution of patterns in your data?
  • Are these the patterns you expect?
  • What range of values exist, and are they expected? What are the maximum, minimum, and average values for given data? Are these the ranges you expect?
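
One way to compute these checks, sketched here in Python with pandas; the DataFrame df and its columns are hypothetical examples, not data from the article:

```python
import pandas as pd

# Hypothetical sample data; in practice, load your source table instead.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, None],
    "country":     ["US", "US", "DE", None, "FR"],
    "order_total": [100.0, 250.5, 250.5, 75.0, 3200.0],
})

# Completeness: blank or null values per column.
print(df.isnull().sum())

# Uniqueness: distinct values per column, and duplicated rows.
print(df.nunique())
print("duplicate rows:", df.duplicated().sum())

# Patterns: distribution of values in a column.
print(df["country"].value_counts(dropna=False))

# Ranges: minimum, maximum, and average for numeric data.
print(df["order_total"].agg(["min", "max", "mean"]))
```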

Answering these questions helps you ensure that you are maintaining quality data, which companies increasingly recognize as the cornerstone of a thriving business. For more information, see our post on data quality.

How Do You Profile Data?

Data profiling can be performed in different ways, but there are roughly three base methods used to analyze the data; a short code sketch follows the list.

  • Column profiling counts the number of times every value appears within each column in a table. This method helps to uncover the patterns within your data.

  • Cross-column profiling looks across columns to perform key and dependency analysis. Key analysis scans collections of values in a table to locate a potential primary key. Dependency analysis determines the dependent relationships within a data set. Together, these analyses determine the relationships and dependencies within a table.

  • Cross-table profiling looks across tables to identify potential foreign keys. It also attempts to determine the similarities and differences in syntax and data types between tables to determine which data might be redundant and which could be mapped together.
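
As a minimal sketch of all three methods in Python with pandas; the customers and orders tables and their columns are hypothetical examples:

```python
import pandas as pd

# Hypothetical tables standing in for a real source system.
customers = pd.DataFrame({"customer_id": [1, 2, 3],
                          "country": ["US", "DE", "US"]})
orders = pd.DataFrame({"order_id": [10, 11, 12],
                       "customer_id": [1, 1, 3]})

# Column profiling: frequency of every value within each column.
for col in orders.columns:
    print(orders[col].value_counts(dropna=False))

# Cross-column profiling (key analysis): columns whose values are
# unique and non-null are potential primary keys.
candidate_keys = [c for c in customers.columns
                  if customers[c].is_unique and customers[c].notna().all()]
print("candidate keys:", candidate_keys)

# Cross-column profiling (dependency analysis): customer_id -> country
# holds if each customer_id maps to exactly one country.
print((customers.groupby("customer_id")["country"].nunique() <= 1).all())

# Cross-table profiling: orders.customer_id is a potential foreign key
# if every value also appears in customers.customer_id.
print(set(orders["customer_id"]).issubset(set(customers["customer_id"])))
```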

Rule validation is sometimes considered the final step in data profiling. This is a proactive step of adding rules that check for the correctness and integrity of the data that is entered into the system.
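
A rule-validation step might look like the following sketch; the table and the rules themselves are hypothetical examples:

```python
import pandas as pd

# Hypothetical incoming records to validate before loading.
orders = pd.DataFrame({"order_id": [10, 11, 11],
                       "total": [100.0, -5.0, 250.5]})

# Each rule asserts a correctness or integrity property of the data.
rules = {
    "order_id is unique": orders["order_id"].is_unique,
    "no null order_ids": bool(orders["order_id"].notna().all()),
    "totals are non-negative": bool((orders["total"] >= 0).all()),
}
for name, passed in rules.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```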

These different methods may be performed manually by an analyst, or they may be performed by a service that can automate these queries.

Data Profiling Challenges

Data profiling is often difficult due to the sheer volume of data you’ll need to profile. This is especially true if you are looking at a legacy system. A legacy system might have years of older data with thousands of errors. Experts recommend that you segment your data as a part of your data profiling process so that you can see the forest for the trees.
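
One way to segment the work is to profile the data in chunks and merge the partial results, as in this sketch; the file legacy_export.csv and the status column are hypothetical:

```python
import pandas as pd

# Profile a large file in segments rather than loading it all at once.
null_counts = None
value_counts = None
for chunk in pd.read_csv("legacy_export.csv", chunksize=100_000):
    nulls = chunk.isnull().sum()
    counts = chunk["status"].value_counts(dropna=False)
    # Accumulate per-chunk results into running totals.
    null_counts = nulls if null_counts is None else null_counts.add(nulls, fill_value=0)
    value_counts = counts if value_counts is None else value_counts.add(counts, fill_value=0)

print(null_counts)
print(value_counts)
```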

If you manually perform your data profiling, you’ll need an expert to run numerous queries and sift through the results to gain meaningful insights about your data, which can eat up precious resources. In addition, you will likely only be able to check a subset of your overall data because it is too time-consuming to go through the entire data set.

Published at DZone with permission of Garrett Alley, DZone MVB.

Opinions expressed by DZone contributors are their own.
