What Is Data Profiling?
Data profiling is the process of examining data from an existing source and summarizing information about that data. You profile data to determine the accuracy, completeness, and validity of your data. Data profiling can be done for many reasons, but it is most commonly used to assess data quality as part of a larger project. Commonly, data profiling is combined with an ETL (Extract, Transform, and Load) process to move data from one system to another. When done properly, ETL and data profiling can be combined to cleanse, enrich, and move quality data to a target location.
For example, you might want to perform data profiling when migrating from a legacy system to a new system. Data profiling can help identify data quality issues that need to be handled in the code when you move data into your new system. Or, you might want to perform data profiling as you move data to a data warehouse for business analytics. Often when data is moved to a data warehouse, ETL tools are used to move the data. Data profiling can be helpful in identifying what data quality issues must be fixed in the source, and what data quality issues can be fixed during the ETL process.
Why Profile Data?
Data profiling allows you to answer the following questions about your data:
- Is the data complete? Are there blank or null values?
- Is the data unique? How many distinct values are there? Is the data duplicated?
- Are there anomalous patterns in your data? What is the distribution of patterns in your data, and are these the patterns you expect?
- What range of values exist, and are they expected? What are the maximum, minimum, and average values for given data? Are these the ranges you expect?
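As a sketch, the questions above map to a handful of per-column statistics. The helper below is a minimal, hypothetical illustration in pure Python (not a specific profiling tool's API) that reports completeness, uniqueness, and value ranges for one column:

```python
from collections import Counter

def profile_column(values):
    """Summarize completeness, uniqueness, and range for one column."""
    total = len(values)
    # Treat None and empty strings as missing values.
    non_null = [v for v in values if v is not None and v != ""]
    counts = Counter(non_null)
    numeric = [v for v in non_null if isinstance(v, (int, float))]
    return {
        "total": total,
        "nulls": total - len(non_null),
        "distinct": len(counts),
        "duplicated": any(c > 1 for c in counts.values()),
        "min": min(numeric) if numeric else None,
        "max": max(numeric) if numeric else None,
        "mean": sum(numeric) / len(numeric) if numeric else None,
    }

ages = [34, 28, None, 34, 51, "", 42]
print(profile_column(ages))
# {'total': 7, 'nulls': 2, 'distinct': 4, 'duplicated': True,
#  'min': 28, 'max': 51, 'mean': 37.8}
```

In practice you would run this kind of summary over every column and compare the results against your expectations for the data.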
Answering these questions helps you ensure that you are maintaining quality data, which, as companies are increasingly realizing, is the cornerstone of a thriving business. For more information, see our post on data quality.
How Do You Profile Data?
Data profiling can be performed in different ways, but there are roughly three base methods used to analyze the data.
Column profiling counts the number of times every value appears within each column in a table. This method helps to uncover the patterns within your data.
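One common way to surface patterns during column profiling is to reduce each value to a character-class "shape" and count how often each shape occurs. This is a hedged sketch (the `pattern_of` helper is hypothetical, not from any particular tool):

```python
from collections import Counter

def pattern_of(value):
    """Reduce a value to a shape: digits become 9, letters become A."""
    return "".join(
        "9" if ch.isdigit() else "A" if ch.isalpha() else ch
        for ch in str(value)
    )

phones = ["555-1234", "555-9876", "5551234", "call me"]
print(Counter(pattern_of(p) for p in phones))
# Counter({'999-9999': 2, '9999999': 1, 'AAAA AA': 1})
```

A distribution dominated by one expected shape, with a long tail of odd shapes, immediately points to the rows worth investigating.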
Cross-column profiling looks across columns to perform key and dependency analysis. Key analysis scans collections of values in a table to locate a potential primary key. Dependency analysis determines the dependent relationships within a data set. Together, these analyses determine the relationships and dependencies within a table.
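Key analysis boils down to checking whether a set of columns uniquely identifies every row. A minimal sketch of that check (pure Python, with invented sample data):

```python
def is_candidate_key(rows, columns):
    """A column set is a candidate key if no value combination repeats."""
    seen = set()
    for row in rows:
        combo = tuple(row[c] for c in columns)
        if combo in seen:
            return False
        seen.add(combo)
    return True

orders = [
    {"order_id": 1, "customer": "ann"},
    {"order_id": 2, "customer": "ann"},
    {"order_id": 3, "customer": "bob"},
]
print(is_candidate_key(orders, ["order_id"]))   # True: unique, a potential key
print(is_candidate_key(orders, ["customer"]))   # False: duplicated values
```

A profiling tool would scan many column combinations this way and rank the ones that come back unique as potential primary keys.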
Cross-table profiling looks across tables to identify potential foreign keys. It also attempts to determine the similarities and differences in syntax and data types between tables to determine which data might be redundant and which could be mapped together.
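A common cross-table heuristic is value containment: if every distinct value in one table's column also appears in another table's column, the first is a plausible foreign key into the second. A minimal sketch, with hypothetical sample data:

```python
def containment(child_values, parent_values):
    """Fraction of distinct child values found in the parent column.

    A score of 1.0 suggests the child column may be a foreign key.
    """
    child = set(child_values)
    if not child:
        return 0.0
    return len(child & set(parent_values)) / len(child)

customer_ids = [101, 102, 103]           # candidate parent (primary key)
order_customer_ids = [101, 101, 103]     # candidate child (foreign key)
print(containment(order_customer_ids, customer_ids))  # 1.0
```

Real tools would combine a containment score like this with data-type and name similarity before proposing a mapping between tables.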
Rule validation is sometimes considered the final step in data profiling. This is a proactive step of adding rules that check for the correctness and integrity of the data that is entered into the system.
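Rule validation can be as simple as a named set of predicates applied to each incoming record. The sketch below is illustrative only; the rule names and fields are invented for the example:

```python
def validate(row, rules):
    """Return the names of the rules that the row violates."""
    return [name for name, check in rules.items() if not check(row)]

rules = {
    "age_in_range": lambda r: 0 <= r["age"] <= 120,
    "email_has_at": lambda r: "@" in r["email"],
}

print(validate({"age": 34, "email": "a@b.com"}, rules))  # []
print(validate({"age": -5, "email": "bad"}, rules))      # both rules fail
```

Running such rules at data entry (or as a gate in the ETL pipeline) keeps bad records from accumulating in the target system in the first place.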
These different methods may be performed manually by an analyst, or they may be performed by a service that can automate these queries.
Data Profiling Challenges
Data profiling is often difficult due to the sheer volume of data you’ll need to profile. This is especially true if you are looking at a legacy system. A legacy system might have years of older data with thousands of errors. Experts recommend that you segment your data as a part of your data profiling process so that you can see the forest for the trees.
If you manually perform your data profiling, you’ll need an expert to run numerous queries and sift through the results to gain meaningful insights about your data, which can eat up precious resources. In addition, you will likely only be able to check a subset of your overall data because it is too time-consuming to go through the entire data set.
Published at DZone with permission of Garrett Alley, DZone MVB. See the original article here.