Pandas/scikit-learn: get_dummies Test/Train Sets
In my time using get_dummies in pandas to generate dummy columns for categorical variables to use with scikit-learn, I noticed it didn't always work as expected. Here's why.
I've been using pandas' get_dummies function to generate dummy columns for categorical variables to use with scikit-learn, but noticed that it sometimes doesn't work as I expect.
Prerequisites:
import pandas as pd
import numpy as np
from sklearn import linear_model
Let’s say we have the following training and test sets.
Training set:
train = pd.DataFrame({"letter":["A", "B", "C", "D"], "value": [1, 2, 3, 4]})
X_train = train.drop(["value"], axis=1)
X_train = pd.get_dummies(X_train)
y_train = train["value"]
Test set:
test = pd.DataFrame({"letter":["D", "D", "B", "E"], "value": [4, 5, 7, 19]})
X_test = test.drop(["value"], axis=1)
X_test = pd.get_dummies(X_test)
y_test = test["value"]
Now say we want to train a linear model on our training set and then use it to predict the values in our test set.
Train the model:
lr = linear_model.LinearRegression()
model = lr.fit(X_train, y_train)
Test the model:
model.score(X_test, y_test)
ValueError: shapes (4,3) and (4,) not aligned: 3 (dim 1) != 4 (dim 0)
Hmmm, that didn’t go to plan. If we print X_train
and X_test
, we might help shed some light.
Checking the train/test datasets:
print(X_train)
letter_A letter_B letter_C letter_D
0 1 0 0 0
1 0 1 0 0
2 0 0 1 0
3 0 0 0 1
print(X_test)
letter_B letter_D letter_E
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
They do indeed have different shapes and different column names: the test set contains a value ('E') that never appears in the training set, and is missing others ('A' and 'C') that do.
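A quick aside before the proper fix: if all we need is aligned matrices, we can force the test frame to take exactly the training frame's columns with reindex (a sketch, not the approach this post takes):
Align columns (alternative sketch):
# force X_test to have X_train's columns, zero-filling anything missing
X_test_aligned = X_test.reindex(columns=X_train.columns, fill_value=0)
This silently drops the letter_E column, so rows containing the unseen 'E' become all zeros. The approach below keeps the full category set instead.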
We can fix this properly by making the 'letter' field categorical before we run the get_dummies function over the dataframe. At the moment the field is of type 'object':
Column types:
train.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4 entries, 0 to 3
Data columns (total 2 columns):
letter 4 non-null object
value 4 non-null int64
dtypes: int64(1), object(1)
memory usage: 144.0+ bytes
We do that by converting the 'letter' field to the type 'category' and setting its list of allowed values to the unique set of values across the train and test sets.
All allowed values:
all_data = pd.concat((train, test))
# np.object was removed from NumPy; the plain "object" dtype name works here
for column in all_data.select_dtypes(include=["object"]).columns:
    print(column, all_data[column].unique())
letter ['A' 'B' 'C' 'D' 'E']
Now let’s update the type of our ‘letter’ field in the train and test dataframes.
Type ‘category’:
all_data = pd.concat((train, test))
for column in all_data.select_dtypes(include=["object"]).columns:
    # astype('category', categories=...) no longer works in newer pandas;
    # a CategoricalDtype carries the full category list instead
    dtype = pd.api.types.CategoricalDtype(categories=all_data[column].unique())
    train[column] = train[column].astype(dtype)
    test[column] = test[column].astype(dtype)
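As a quick sanity check (a sketch), both frames should now report the same category list:
print(train["letter"].cat.categories)  # Index(['A', 'B', 'C', 'D', 'E'], dtype='object')
print(test["letter"].cat.categories)   # same categories, same order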
And now if we call get_dummies on either dataframe, we'll get the same set of columns:
get_dummies, take 2:
X_train = train.drop(["value"], axis=1)
X_train = pd.get_dummies(X_train)
print(X_train)
letter_A letter_B letter_C letter_D letter_E
0 1 0 0 0 0
1 0 1 0 0 0
2 0 0 1 0 0
3 0 0 0 1 0
X_test = test.drop(["value"], axis=1)
X_test = pd.get_dummies(X_test)
print(X_test)
letter_A letter_B letter_C letter_D letter_E
0 0 0 0 1 0
1 0 0 0 1 0
2 0 1 0 0 0
3 0 0 0 0 1
Great! Now we should be able to train our model and use it against the test set:
Train the model, take 2:
lr = linear_model.LinearRegression()
model = lr.fit(X_train, y_train)
Test the model, take 2:
model.score(X_test, y_test)
-1.0604490500863557
And we're done! The score itself is poor (model.score here is R², and a negative value means we do worse than always predicting the mean), but that's to be expected from a four-row toy dataset; the point is that the shapes now line up and scoring works.
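As an aside, if you'd rather not manage category lists by hand, scikit-learn's OneHotEncoder can do the bookkeeping for you: fit it on the training data and pass handle_unknown="ignore" so unseen test categories don't raise an error. A minimal sketch (not from the original walkthrough):
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown="ignore")
X_train_enc = enc.fit_transform(train[["letter"]])  # learns categories A-D from the training set
X_test_enc = enc.transform(test[["letter"]])        # unseen 'E' becomes an all-zero row, no error
Note the encoder only knows the training categories, so there is no letter_E column here, unlike the category-dtype approach above.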