
That Damn R-Squared


Another post about the R-squared coefficient, and about why, after some years teaching econometrics, I still hate when students ask questions about it. Usually, it starts with "I have a _____ R-squared... isn't it too low?" Please, feel free to fill in the blank with your favorite (low) number. Say 0.2. To keep it simple, there are different answers to that question:

  1. If you don't want to waste time understanding econometrics, I would say something like "Forget about the R-squared, it is useless" (perhaps also "please, think twice about taking that econometrics course")

  2. If you're ready to spend some time getting a better understanding of subtle concepts, I would say "I don't like the R-squared. It might be interesting in some rare cases (you can probably count them on the fingers of one finger), like comparing two models on the same dataset (even so, I would recommend the adjusted one). But usually, its value has no meaning. You can compare 0.2 and 0.3 (and prefer the 0.3 R-squared model to the 0.2 R-squared one), but 0.2 alone means nothing." Well, not exactly, since it means something, but it is not a measure that tells you whether you are dealing with a good or a bad model. Well, again, not exactly, but it is rather difficult to say where bad ends and where good starts. Actually, it is exactly like the correlation coefficient (there is nothing mysterious here, since the R-squared can be related to some correlation coefficient, as mentioned in class)

  3. If you want some more advanced advice, I would say "It's complicated..." (and perhaps also "Look in a textbook written by someone more clever than me -- you can find hundreds of them in the library!")

  4. If you want me to act like people we've seen recently on TV (during political speeches), "It's extremely interesting, but before answering your question, let me tell you a story..."
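Point 2 above mentions preferring the adjusted R-squared when comparing models. A small simulation (a sketch, with arbitrary made-up numbers) shows why: adding a pure-noise covariate mechanically increases the raw R-squared, while the adjusted version penalizes the extra parameter and can go down.

```r
# Adding a useless covariate: R-squared never decreases,
# the adjusted R-squared can.
set.seed(1)
n <- 20
X <- runif(n)
Y <- 2 + 5*X + rnorm(n)*.5
Z <- rnorm(n)                          # pure noise, unrelated to Y
r1 <- summary(lm(Y ~ X))
r2 <- summary(lm(Y ~ X + Z))
r2$r.squared >= r1$r.squared           # always TRUE, mechanically
c(r1$adj.r.squared, r2$adj.r.squared)  # the adjusted one may decrease
```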

Perhaps that last strategy is the best one, and I should focus on the story. I mean, this is exactly why I have my blog: to tell (nice) stories. With graphs, and math formulas inside. First of all, consider a regression model

yᵢ = β₀ + β₁xᵢ + εᵢ

so that the R-squared is defined as

R² = 1 − Σᵢ(yᵢ − ŷᵢ)² / Σᵢ(yᵢ − ȳ)²

i.e. the proportion of the variance of y explained by the model.
Let us generate datasets, and then run regressions, to see what's going on...
For instance, consider 20 observations, with one variable of interest, one explanatory variable, and some low-variance noise (to start with)
> set.seed(1)
> n=20
> X=runif(n)
> E=rnorm(n)
> Y=2+5*X+E*.5
> base=data.frame(X,Y)
> reg=lm(Y~X,data=base)
> summary(reg)
Call:
lm(formula = Y ~ X, data = base)

Residuals:
     Min       1Q   Median       3Q      Max 
-1.15961 -0.17470  0.08719  0.29409  0.52719 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   2.4706     0.2297   10.76 2.87e-09 ***
X             4.2042     0.3697   11.37 1.19e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.461 on 18 degrees of freedom
Multiple R-squared: 0.8778,	Adjusted R-squared: 0.871
F-statistic: 129.3 on 1 and 18 DF,  p-value: 1.192e-09
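As a sanity check, the R-squared reported by summary() can be recomputed by hand from the definition above (a sketch, replaying the same simulation):

```r
# Recompute the R-squared from its definition
set.seed(1)
n <- 20
X <- runif(n)
E <- rnorm(n)
Y <- 2 + 5*X + E*.5
reg <- lm(Y ~ X)
SSR <- sum(residuals(reg)^2)   # residual sum of squares
SST <- sum((Y - mean(Y))^2)    # total sum of squares
1 - SSR/SST                    # same value as summary(reg)$r.squared
cor(Y, fitted(reg))^2          # and the same as a squared correlation
```

The last line is the connection with the correlation coefficient mentioned earlier: with an intercept in the model, the R-squared is the squared correlation between the observations and the fitted values.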

The R-squared is high (almost 0.9). What if the underlying model is exactly the same, but now, the noise has a much higher variance?

> Y=2+5*X+E*4
> base=data.frame(X,Y)
> reg=lm(Y~X,data=base)
> summary(reg)
Call:
lm(formula = Y ~ X, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-9.2769 -1.3976  0.6976  2.3527  4.2175 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)    5.765      1.837   3.138  0.00569 **
X             -1.367      2.957  -0.462  0.64953
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.688 on 18 degrees of freedom
Multiple R-squared: 0.01173,	Adjusted R-squared: -0.04318
F-statistic: 0.2136 on 1 and 18 DF,  p-value: 0.6495

Now, the R-squared is rather low (around 0.01). Thus, the quality of the regression depends clearly on the variance of the noise: the higher the variance, the lower the R-squared. And usually, there is not much you can do about it! On the graph below, the noise is changing, from no noise to extremely noisy, with the least squares regression in blue (and a confidence interval on the prediction).

If we compare with the graph below, one can observe that the quality of the fit also depends on the sample size, now with 100 observations (instead of 20).
So far, nothing new (if you remember what was said in class). In those two cases, here is the evolution of the R-squared as a function of the variance of the noise (more precisely, of the standard deviation of the noise)
> S=seq(0,4,by=.2)
> R2=rep(NA,length(S))
> for(s in 1:length(S)){
+ Y=2+5*X+E*S[s]
+ base=data.frame(X,Y)
+ reg=lm(Y~X,data=base)
+ R2[s]=summary(reg)$r.squared}
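The same loop can be run with 100 observations instead of 20 (a sketch; in the original post, both curves are overlaid on one graph, 20 observations in blue, 100 in black):

```r
# Same experiment, larger sample: R-squared vs. noise level, n = 100
set.seed(1)
n <- 100
X <- runif(n)
E <- rnorm(n)
S <- seq(0, 4, by = .2)
R2 <- sapply(S, function(s) {
  Y <- 2 + 5*X + E*s
  summary(lm(Y ~ X))$r.squared
})
plot(S, R2, type = "b",
     xlab = "standard deviation of the noise", ylab = "R-squared")
```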


with 20 observations in blue, 100 in black. One important point is that in econometrics, we rarely choose the number of observations. If we have only 100 observations, we have to deal with it. Similarly, if observations are quite noisy, there is (usually) not much we can do about it, all the more so if you don't have any explanatory variables left. Perhaps you might play (or try to play) with nonlinear effects...

Nevertheless, it looks like some econometricians really care about the R-squared, and cannot imagine looking at a model if the R-squared is lower than, say, 0.4. It is always possible to reach that level! You just have to add more covariates, if you have some. And if you don't, it is always possible to use polynomials of a continuous covariate. For instance, on the previous example (using the 100-observation sample, since poly() requires the degree to stay below the number of distinct values of X),


> S=seq(1,25,by=1)
> R2=rep(NA,length(S))
> for(s in 1:length(S)){
+ reg=lm(Y~poly(X,degree=s),data=base)
+ R2[s]=summary(reg)$r.squared}

If we plot the R-squared as a function of the degree of the polynomial regression, we get the following graph. Once again, the higher the degree, the more covariates, and the more covariates, the higher the R-squared,


I.e. with degree 22, it is possible to reach a 0.4 R-squared. But it might be interesting to look at the prediction we get with that model,
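A sketch of what that degree-22 fit looks like (on a fresh 100-observation sample with arbitrary simulation parameters, so that poly() accepts degree 22):

```r
# Fit a degree-22 polynomial and plot its prediction over the data
set.seed(1)
n <- 100
X <- runif(n)
Y <- 2 + 5*X + rnorm(n)*4
base <- data.frame(X, Y)
reg22 <- lm(Y ~ poly(X, degree = 22), data = base)
u <- data.frame(X = seq(min(X), max(X), length = 251))
plot(X, Y)
lines(u$X, predict(reg22, newdata = u), col = "red")
```

The curve wiggles through the noise, especially near the edges of the sample: the extra R-squared is bought with overfitting.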


So, was it worth adding so many polynomial terms? I mean, 22 is quite a large power. Here, the linear regression was significant, but not great. So what? The R-squared was small? So what? Sometimes, there's not much you can do about it. When dealing with individual observations (so-called micro-econometrics), the variable of interest might be extremely noisy, and there is not much you can do. So your R-squared can be small, but your regression is perhaps still better than doing nothing. The good thing about a low R-squared is that it reminds us to remain modest when we build a model, and to always be cautious about our conclusions. Or at least to provide a confidence interval.



Published at DZone with permission of Arthur Charpentier, DZone MVB. See the original article here.

