# Data Transformation Tips for the Trenches

### Here are some quick and dirty data transformation techniques you might find useful. I'm not much into exposition, so let's get started.



Depending on the application, data transformation serves different purposes: it can prevent overfitting in regression problems, and in classification it can significantly improve model accuracy.

In contrast, scaling does not improve the accuracy of unsupervised techniques, such as k-means clustering, in the same way. It is still beneficial in this scenario, though, since it reduces data sparsity and keeps large-magnitude features from dominating distance calculations.
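To see why scaling still matters for a distance-based method like k-means, here is a minimal NumPy sketch with hypothetical income/age data (the numbers are illustrative): before scaling, the large-magnitude income feature dominates the Euclidean distance; after standardizing each column, both features contribute comparably.

```python
import numpy as np

# Toy data: income (scale of tens of thousands) and age (scale of tens).
X = np.array([[50_000, 25],
              [51_000, 60],
              [49_500, 30]], dtype=float)

# Raw distance between the first two rows is almost entirely income.
raw = np.linalg.norm(X[0] - X[1])

# Standardize each column, then recompute the same distance.
scaled = (X - X.mean(axis=0)) / X.std(axis=0)
after = np.linalg.norm(scaled[0] - scaled[1])

print(raw, after)  # raw ~1000; after scaling, the age gap matters too
```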

## Techniques for Transformation

Which transformation technique is appropriate depends on the data and the application at hand:

### Importance-Based Transformation

One of the most important decisions when transforming variables is choosing which variable to transform. You can do this automatically using feature selection methods such as the variance inflation factor (VIF) test or feature importances from a decision tree or random forest model.
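A minimal sketch of the VIF idea in plain NumPy (the data and the common "VIF > 10" rule of thumb are illustrative; in practice you might reach for a library implementation such as statsmodels'): each column is regressed on the others, and columns with a high VIF are strong candidates for removal or transformation.

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column of X (n_samples, n_features).
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on all remaining columns (with an intercept)."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])      # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 2 * x1 + rng.normal(scale=0.1, size=200)  # nearly collinear with x1
x3 = rng.normal(size=200)                      # independent
v = vif(np.column_stack([x1, x2, x3]))
print(v)  # x1 and x2 get large VIFs; x3 stays near 1
```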

### Variable Distribution

The type of output variable dictates the choice of model, the loss function, and the performance metric (e.g., binary class labels vs. probability values).

Importance of interpretation: if I focus my analysis on understanding industries, groups, or cases rather than on prediction, then it makes more sense to keep the variables in their original form.

### Logarithmic Transformation

Logarithmic transformation takes the natural log of every value in a variable, compressing large values and spreading out small positive values disproportionately.

Common cases are:

- The target variable has a nonlinear relationship with the predictor; taking the log of the target can make the relationship approximately linear.
- The data is highly skewed, and log transformation pulls the skew toward 0. You can use arithmetic operations on transformed data, although the monitored metrics will be in transformed units. Important note: to apply a log to zero or negative values, you must add or subtract an arbitrary constant so that the input to the log is > 0.
- For a variable X with an arbitrary constant C, the transform is Y = log(X + C). Different C values give different distributions, so I recommend testing by checking the metrics of interest.
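The points above can be sketched in a few lines of NumPy. The data here is hypothetical (exponentially distributed values with some injected zeros, so a plain log would fail), and skewness is used as the example "metric of interest" for comparing different constants C:

```python
import numpy as np

# Hypothetical skewed data containing zeros, so a plain log would fail.
rng = np.random.default_rng(1)
x = rng.exponential(scale=3.0, size=1000)
x[:50] = 0.0  # inject zeros

def log_shift(x, c):
    """Y = log(X + C); C must make every input to the log strictly positive."""
    return np.log(x + c)

def skewness(v):
    """Sample skewness: third standardized moment."""
    return ((v - v.mean()) ** 3).mean() / v.std() ** 3

# Different constants give different shapes, so compare the metric
# of interest for each candidate C before committing to one.
for c in (0.5, 1.0, 5.0):
    print(c, round(skewness(log_shift(x, c)), 3))
```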

### Log Normalization

If the natural log (base e) of a variable is normally distributed, then the variable's distribution is log-normal.

Here’s a log trick: if a variable X is exponentially distributed with rate λ, then exp(−λX) is uniformly distributed on (0, 1) — equivalently, the negative log of a uniform variable is exponential. Therefore, evidence of uniformity in the transformed data suggests that the original data is exponentially distributed.

This property lets you detect underlying exponential structure in raw data with a simple EDA check. I recommend trying both the transformed and untransformed cases before drawing any conclusion.
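A quick sketch of that EDA check on synthetic data (the rate λ = 1 and the decile comparison are illustrative choices): map suspected-exponential data through exp(−x) and compare the empirical deciles to those of a uniform distribution.

```python
import numpy as np

# If X ~ Exponential(1), then exp(-X) ~ Uniform(0, 1).
rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=10_000)

# Transform and compare empirical deciles to the uniform deciles.
back = np.exp(-x)
deciles = np.quantile(back, np.linspace(0.1, 0.9, 9))
print(np.round(deciles, 2))  # should sit close to 0.1, 0.2, ..., 0.9
```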

### Standardization vs. Normalization

Both are ways of rescaling variables. Standardization scales based on variability — subtract the mean and divide by the standard deviation, yielding mean 0 and variance 1 — whereas normalization scales based on range, typically mapping values into [0, 1]. (It might go without saying, but I'll say it anyway.)

To apply these techniques, one must decide whether being on the same scale (standardization) or within the same range (normalization) is more important. There are many variations of these scaling techniques, such as min-max scaling and z-score standardization, which data scientists use widely.
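The two scalings side by side, on a small illustrative array (note how differently each treats the outlier):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 100.0])  # note the outlier

standardized = (x - x.mean()) / x.std()        # same scale: mean 0, variance 1
min_max = (x - x.min()) / (x.max() - x.min())  # same range: [0, 1]

print(standardized.round(3))  # the outlier stays far from the rest
print(min_max.round(3))       # everything squeezed into [0, 1]
```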

### Clipping

Clipping involves discarding, or capping, raw values that exceed an extreme-value threshold in order to reduce the variability of the transformed data.
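A minimal sketch of the capping variant using `np.clip` (the thresholds and sensor-style data are hypothetical): out-of-range values are replaced with the nearest bound rather than dropped.

```python
import numpy as np

# Hypothetical sensor readings with a couple of extreme spikes.
x = np.array([1.2, 0.8, 1.5, 250.0, 1.1, -40.0, 0.9])

# np.clip replaces out-of-range values with the nearest bound.
clipped = np.clip(x, 0.0, 10.0)

print(clipped)
print(x.std(), "->", clipped.std())  # variability drops sharply
```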

### Transforming Into Normal Distribution

You can also simply transform your variable toward the desired distribution (for example, with a quantile or Box-Cox transformation). Format conversion is another family of transformations: for text variables, it may include lemmatization, stemming, and stop word removal; for numeric variables, it may involve replacing values, building transformation pipelines, and re-indexing, among others.
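A toy sketch of text format conversion — stop word removal plus crude suffix stemming. The stop word list and suffix rules here are small illustrative samples; a real pipeline would use a library such as NLTK or spaCy:

```python
# Illustrative stop-word list; real pipelines use much larger ones.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "and", "to"}

def normalize_text(text):
    """Lowercase, drop stop words, and strip a few common suffixes."""
    tokens = text.lower().split()
    kept = [t for t in tokens if t not in STOP_WORDS]
    stemmed = []
    for t in kept:
        for suffix in ("ing", "ed", "s"):  # naive stemming, not linguistic
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

print(normalize_text("The models are transforming the skewed columns"))
# → ['model', 'transform', 'skew', 'column']
```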

There are tens of transformation techniques, but only some will play well with your data set, depending on its format and type. The choice depends heavily on experience. Here is an example in Python that I cherry-picked:

```python
jobNumsFormed["Income"] = jobNumsFormed["Income"].str[1:]  # drop the leading currency symbol
jobNumsFormed["Income"] = jobNumsFormed["Income"].str.replace(",", "", regex=False)  # remove thousands separators
jobNumsFormed["Income"] = jobNumsFormed["Income"].astype("int") / 1_000_000  # parse as int and scale to millions of dollars
```

