
Using AI to Clean Up Big Data


Adi Gaskell discusses the challenge of cleaning data and how software like ActiveClean uses prediction models to guide the cleaning process.


Big data is a hot topic right now, but the successful use of that data largely rests on an organization's ability to provide clean, accurate, and usable data to the employees who need to draw real-time insights from it. Suffice it to say, much of the data held in organizational databases is anything but clean, and few organizations seem willing to undertake the laborious job of cleaning it up.

AI may be about to come to the rescue: a team of researchers from Columbia University and the University of California, Berkeley has developed automated software to do the job for you.

AI to the Rescue

The software, called ActiveClean, runs prediction models over datasets and uses the results to identify the records that require cleaning while simultaneously updating the models themselves.
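In other words, the core idea is an iterative loop: clean a small sample of records, fold the cleaned sample back into the model, and repeat. As a rough sketch only (the synthetic data, the clean_record() helper, the batch size, and the use of scikit-learn's SGDClassifier are all illustrative assumptions, not ActiveClean's actual code), the loop might look like this:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y_true = (X[:, 0] > 0).astype(int)

# Simulate "dirty" data by corrupting 20% of the labels.
y_dirty = y_true.copy()
dirty_idx = rng.choice(1000, size=200, replace=False)
y_dirty[dirty_idx] = 1 - y_dirty[dirty_idx]

def clean_record(i):
    """Stand-in for a human or scripted fix; here it restores the true label."""
    return X[i], y_true[i]

# Fit an initial model on the dirty data.
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X, y_dirty, classes=np.array([0, 1]))

# Clean small batches and update the model incrementally,
# rather than retraining from scratch after every cleaning pass.
for start in range(0, len(dirty_idx), 50):
    batch = dirty_idx[start:start + 50]
    X_c = np.stack([clean_record(i)[0] for i in batch])
    y_c = np.array([clean_record(i)[1] for i in batch])
    model.partial_fit(X_c, y_c)

print("accuracy on true labels:", model.score(X, y_true))
```

The incremental partial_fit calls stand in for the key design choice: the model improves as cleaning proceeds, so the analyst never has to finish cleaning the whole dataset before seeing results.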

“Big data sets are still mostly combined and edited manually, aided by data-cleaning software like Google Refine and Trifacta or custom scripts developed for specific data-cleaning tasks,” the researchers say. “The process consumes up to 80 percent of analysts’ time as they hunt for dirty data, clean it, retrain their model and repeat the process. Cleaning is largely done by guesswork.”

As with so many laborious processes, human error can be a significant factor, so ActiveClean takes people out of the equation in the two most error-prone areas: finding the dirty data in the first place and then updating the models accordingly.

The software uses machine learning to analyze the structure of the model and determine the kinds of errors such a model is likely to generate. When the software was tested against a couple of control methods, the results were positive.
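One way to read "analyzing the structure of the model" is that, for gradient-trained models such as logistic regression, records with large per-record gradients influence the model most, so they are the most valuable ones to clean first. The sketch below shows that heuristic for standard logistic loss; it is a paraphrase of the idea, not ActiveClean's actual sampler:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 0).astype(int)
w = rng.normal(size=5)  # current, imperfect model weights

def gradient_norms(w, X, y):
    """Per-record logistic-loss gradient magnitude: |sigmoid(x.w) - y| * ||x||."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return np.abs(p - y) * np.linalg.norm(X, axis=1)

# Records whose gradients are large would shift the model most if they
# turn out to be wrong, so they are the highest-value cleaning targets.
priority = np.argsort(-gradient_norms(w, X, y))
print("clean these records first:", priority[:10])
```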

The testing was undertaken using the Dollars for Docs database on ProPublica, which holds over 240,000 records of corporate donations to doctors. The data was notably messy, with multiple names for a single drug commonplace. Because of this messiness, improper donations could be detected only 66% of the time; after ActiveClean had cleaned just 5,000 records, that figure jumped to 90%.
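To see why this kind of messiness hurts detection, consider what "multiple names for a single drug" does to any aggregation: donations for one drug get split across several keys. A toy normalization step makes the fix concrete (the alias table below is invented for illustration, not taken from Dollars for Docs):

```python
# Invented name variants, not actual Dollars for Docs entries.
ALIASES = {
    "advair diskus": "advair",
    "advair hfa": "advair",
}

def canonical(name: str) -> str:
    """Map a raw drug-name string to a single canonical key."""
    key = name.strip().lower()
    return ALIASES.get(key, key)

donations = ["ADVAIR", "Advair Diskus", "advair hfa "]
print({canonical(d) for d in donations})  # {'advair'}: three spellings, one drug
```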

“As datasets grow larger and more complex, it’s becoming more and more difficult to properly clean the data,” the researchers say. “ActiveClean uses machine learning techniques to make data cleaning easier while guaranteeing you won’t shoot yourself in the foot.”


Topics:
big data, data cleaning, artificial intelligence

Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.

