
What Is Data Redundancy?


When you start compiling true big data sets, having redundant records is a massive headache. Read on to learn how to avoid redundancy.


Data Redundancy Explained

Data redundancy occurs when the same piece of data is stored in two or more separate places. Suppose you create a database to store sales records, and in the record for each sale you enter the customer's address. If you make multiple sales to the same customer, the same address is entered multiple times. That repeatedly entered address is redundant data.
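The sales-records scenario above can be sketched in a few lines of Python; the customer name, address, and amounts are invented for illustration:

```python
# Hypothetical sales records: each row repeats the customer's address.
sales = [
    {"sale_id": 1, "customer": "John Doe", "address": "12 Oak St", "amount": 40.00},
    {"sale_id": 2, "customer": "John Doe", "address": "12 Oak St", "amount": 15.50},
    {"sale_id": 3, "customer": "John Doe", "address": "12 Oak St", "amount": 99.99},
]

# The address is stored three times but carries only one distinct value;
# that repetition is the redundant data.
distinct_addresses = {row["address"] for row in sales}
print(len(sales), len(distinct_addresses))  # 3 rows, 1 distinct address
```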

How Does Data Redundancy Occur?

Data redundancy can be deliberate; for example, if you back up your company's data nightly, the backup creates redundancy by design. Data redundancy can also occur by mistake. For example, the database designer who created a new record for each sale may not have realized that the design caused the same address to be entered repeatedly. You may also end up with redundant data when you store the same information in multiple systems. For instance, suppose you store the same basic employee information in Human Resources records and in records maintained for your local site office.

Why Data Redundancy Can Be a Problem

When data redundancy is unplanned, it becomes a problem. For example, if your Customers table includes the address as one of its fields, and several members of the John Doe family do business with you from the same address, that address appears in multiple rows. If the family moves, you must update the address for each family member, which is time-consuming and opens the door to a typo in one of the entries. In addition, each unnecessary copy of the address takes up storage space that becomes costly over time. Finally, the more redundancy there is, the harder the data is to maintain. These problems (inconsistent data, wasted space, and the effort required to keep everything in sync) can become a major headache for companies with lots of data.
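The update hazard can be demonstrated with a minimal sketch; the names and addresses are made up, and the skipped row deliberately simulates a missed update:

```python
# Hypothetical customer rows, one per family member, each repeating the address.
customers = [
    {"name": "John Doe", "address": "12 Oak St"},
    {"name": "Jane Doe", "address": "12 Oak St"},
    {"name": "Jim Doe",  "address": "12 Oak St"},
]

# The family moves: every copy of the address must be updated.
# Here the update accidentally skips one row, leaving the data inconsistent.
for row in customers:
    if row["name"] != "Jim Doe":        # simulate a missed row
        row["address"] = "34 Elm Ave"

addresses = {row["address"] for row in customers}
print(addresses)  # two different addresses for one household: an inconsistency
```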

How Data Redundancy Is Resolved for a Database

Zero data redundancy is neither possible nor practical, and many database administrators consider a certain amount of redundancy acceptable as long as there is a central master record. Master data is a single source of common business data used across multiple systems or applications. It is usually non-transactional data, such as a list of customers and their contact information. With master data, when a piece of data changes you update it in only one place, which prevents data inconsistencies.
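The master-data idea can be sketched as follows; the customer ID, name, and addresses are hypothetical:

```python
# Hypothetical master table: one record per customer, keyed by ID.
customers = {
    "C001": {"name": "John Doe", "address": "12 Oak St"},
}

# Transactional sales rows store only the customer ID, never the address.
sales = [
    {"sale_id": 1, "customer_id": "C001", "amount": 40.00},
    {"sale_id": 2, "customer_id": "C001", "amount": 15.50},
]

# When the customer moves, only the master record changes...
customers["C001"]["address"] = "34 Elm Ave"

# ...and every sale automatically sees the new address via the ID lookup.
for sale in sales:
    print(sale["sale_id"], customers[sale["customer_id"]]["address"])
```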

In addition, the process of normalization is commonly used to remove redundancies. When you normalize data, you organize the columns (attributes) and tables (relations) of a database so that their dependencies are correctly enforced by database integrity constraints. Each set of rules for normalizing data is called a normal form, and a database is commonly considered "normalized" once it meets the third normal form (3NF), at which point it is free of insert, update, and delete anomalies.
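A minimal sketch of the decomposition step in normalization, assuming a flat sales table like the earlier example (all values invented; a real design would use a surrogate key rather than the customer name):

```python
# A denormalized table: customer details repeat in every sale row.
flat = [
    {"sale_id": 1, "customer": "John Doe", "address": "12 Oak St", "amount": 40.00},
    {"sale_id": 2, "customer": "John Doe", "address": "12 Oak St", "amount": 15.50},
]

# Decompose into two relations: customer attributes move into their own
# table, and the sales table keeps only a key referencing it.
customers = {}
sales = []
for row in flat:
    key = row["customer"]  # natural key, used here only for the sketch
    customers[key] = {"address": row["address"]}
    sales.append({"sale_id": row["sale_id"], "customer": key, "amount": row["amount"]})

# Each address is now stored exactly once.
print(len(customers), len(sales))  # 1 customer record, 2 sale records
```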

How Data Redundancy Is Resolved Between Different Systems

But what if you have data redundancy across multiple systems? For example, what if employee information is stored in a departmental database, a Human Resources database, and a database for your site office? Many companies handle this by integrating their data and removing the redundancies in the process. However, integration can be time-intensive and involves multiple steps, including designing the data warehouse and cleansing the data. If you're interested, see What is Data Integration? for more details about the process of data integration.
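The deduplication part of such an integration can be sketched simply; the employee IDs, names, and departments are invented, and real integration would also involve matching records whose keys do not line up cleanly:

```python
# Hypothetical employee records from two systems with an overlapping entry.
hr_db = [
    {"emp_id": 101, "name": "Ann Lee", "dept": "Finance"},
    {"emp_id": 102, "name": "Bob Ray", "dept": "IT"},
]
site_db = [
    {"emp_id": 102, "name": "Bob Ray", "dept": "IT"},      # duplicate of HR record
    {"emp_id": 103, "name": "Cara Fox", "dept": "Sales"},
]

# A simple integration step: merge both sources, keeping one record per ID.
merged = {}
for row in hr_db + site_db:
    merged[row["emp_id"]] = row   # later sources overwrite earlier duplicates

print(sorted(merged))  # three employees, no duplicates
```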



