Advanced ETL Functionality and Machine Learning Pre-Processing [Video]
Learn commonly used advanced ETL functionalities for machine learning-based outlier detection, feature generation, imputing missing values, and more!
Today, we look at a dataset that supposedly is already clean, joined with the right additional information, and in the right shape — and we want to use it to train a prediction model. Unfortunately, a quick glance at the dataset reveals that it still has tons of missing values, it is not normalized, and it contains too many very similar features.
This means that any algorithm would have a really hard time training a good prediction model on it. Most likely, it would produce a great candidate for a garbage-in/garbage-out type of model.
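The article carries out this kind of data-quality audit with KNIME nodes, but the idea is easy to illustrate outside KNIME as well. The following sketch (plain Python, with made-up column names and values, not the article's dataset) counts missing values per column and flags near-duplicate features via Pearson correlation:

```python
import math

# Toy dataset: rows of (feature_a, feature_b, feature_c); None marks a missing value.
# Column names and values are illustrative, not taken from the article.
rows = [
    (1.0, 2.1, 5.0),
    (2.0, 3.9, None),
    (3.0, 6.2, 4.0),
    (None, 8.0, 3.0),
    (5.0, 9.8, 2.0),
]
columns = ["feature_a", "feature_b", "feature_c"]

# 1. Missing-value count per column.
missing = {
    col: sum(1 for row in rows if row[i] is None)
    for i, col in enumerate(columns)
}

def pearson(xs, ys):
    """Pearson correlation over the pairs where both values are present."""
    pairs = [(x, y) for x, y in zip(xs, ys) if x is not None and y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
    return cov / (sx * sy)

# 2. Flag "very similar" feature pairs: |correlation| above a threshold.
cols = list(zip(*rows))
redundant = [
    (columns[i], columns[j])
    for i in range(len(columns))
    for j in range(i + 1, len(columns))
    if abs(pearson(cols[i], cols[j])) > 0.95
]
```

A pair landing in `redundant` is a candidate for dropping one of its two columns, since near-duplicate features add little information but inflate dimensionality.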
So, before we can get to the fun part and train the model, we need to run some pre-processing, because the quality of the model can only be as good as the quality of the input data.
To improve the quality of the dataset, we proceed with different pre-processing steps. We delete outliers, create new features from the raw data, impute missing values, reduce dimensionality, and much more, including a number of automatic and machine learning approaches. It is all described in the video below.
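The video implements these steps with native KNIME nodes; purely as an illustration of what three of them do under the hood, here is a minimal pure-Python sketch of mean imputation, z-score outlier removal, and min-max normalization on one toy column (the data and the |z| > 2 cutoff are assumptions for the example, not values from the article):

```python
import statistics

# Toy single-feature column; None marks a missing value.
values = [4.0, 5.0, None, 6.0, 5.5, 100.0, 4.5]

# 1. Impute missing values with the mean of the observed entries.
observed = [v for v in values if v is not None]
imputed = [statistics.fmean(observed) if v is None else v for v in values]

# 2. Remove outliers with a simple z-score rule (|z| > 2 here; a real
#    pipeline might use the IQR rule or an ML-based detector instead).
mu = statistics.fmean(imputed)
sigma = statistics.stdev(imputed)
kept = [v for v in imputed if abs(v - mu) / sigma <= 2.0]

# 3. Min-max normalization of the surviving values into [0, 1].
lo, hi = min(kept), max(kept)
normalized = [(v - lo) / (hi - lo) for v in kept]
```

On this column, the extreme value 100.0 is dropped by the z-score rule, and the remaining values are rescaled so the smallest maps to 0 and the largest to 1. Note that order matters: imputing with the mean before removing outliers lets the outlier distort the imputed value, which is one reason real pipelines tune the sequence of these steps.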
To summarize, this video gives an overview of the pre-processing techniques needed before training a model and of the native KNIME nodes suitable to implement them.
Published at DZone with permission of Kathrin Melcher, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.