That Damn R-Squared
Another post about the R-squared coefficient, and about why, after some years teaching econometrics, I still hate when students ask questions about it. Usually, it starts with "I have a _____ R-squared... isn't it too low?" Please, feel free to fill in the blank with your favorite (low) number. Say 0.2. To make it simple, there are different answers to that question:
If you don't want to waste time understanding econometrics, I would say something like "Forget about the R-squared, it is useless" (perhaps also "please, think twice about taking that econometrics course").
If you're ready to spend some time getting a better understanding of subtle concepts, I would say "I don't like the R-squared. It might be interesting in some rare cases (you can probably count them on the fingers of one hand), like comparing two models on the same dataset (even so, I would recommend the adjusted one). But usually, its value has no meaning. You can compare 0.2 and 0.3 (and prefer the 0.3 R-squared model rather than the 0.2 one), but 0.2 on its own means nothing." Well, not exactly, since it means something, but it is not a measure that tells you whether you are dealing with a good or a bad model. Well, again, not exactly, but it is rather difficult to say where bad ends and where good starts. Actually, it is exactly like the correlation coefficient (and there is nothing mysterious here, since the R-squared can be related to some correlation coefficient, as mentioned in class).
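Since, in a simple linear regression, the R-squared is just the squared correlation coefficient between the two variables, a quick check on simulated data of my own (not the exact simulation used below) makes the link explicit:

```r
# In a simple linear regression, the R-squared is exactly the squared
# (Pearson) correlation between X and Y
set.seed(1)
X <- runif(20)
Y <- 2 + 5*X + rnorm(20)*.5
reg <- lm(Y ~ X)
summary(reg)$r.squared
cor(X, Y)^2   # the two values coincide
```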
If you want some more advanced advice, I would say "It's complicated..." (and perhaps also "Look in a textbook written by someone more clever than me; you can find hundreds of them in the library!").
If you want me to act like the people we've seen recently on TV (during political speeches), I would say "It's extremely interesting, but before answering your question, let me tell you a story..."
> set.seed(1)
> n=20
> X=runif(n)
> E=rnorm(n)
> Y=2+5*X+E*.5
> base=data.frame(X,Y)
> reg=lm(Y~X,data=base)
> summary(reg)

Call:
lm(formula = Y ~ X, data = base)

Residuals:
     Min       1Q   Median       3Q      Max 
-1.15961 -0.17470  0.08719  0.29409  0.52719 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   2.4706     0.2297   10.76 2.87e-09 ***
X             4.2042     0.3697   11.37 1.19e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.461 on 18 degrees of freedom
Multiple R-squared: 0.8778,	Adjusted R-squared: 0.871 
F-statistic: 129.3 on 1 and 18 DF,  p-value: 1.192e-09
The R-squared is high (almost 0.9). What if the underlying model is exactly the same, but now the noise has a much higher variance?
> Y=2+5*X+E*4
> base=data.frame(X,Y)
> reg=lm(Y~X,data=base)
> summary(reg)

Call:
lm(formula = Y ~ X, data = base)

Residuals:
    Min      1Q  Median      3Q     Max 
-9.2769 -1.3976  0.6976  2.3527  4.2175 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)    5.765      1.837   3.138  0.00569 **
X             -1.367      2.957  -0.462  0.64953   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.688 on 18 degrees of freedom
Multiple R-squared: 0.01173,	Adjusted R-squared: -0.04318 
F-statistic: 0.2136 on 1 and 18 DF,  p-value: 0.6495
Now, the R-squared is rather low (around 0.01). Thus, the quality of the regression clearly depends on the variance of the noise: the higher the variance, the lower the R-squared. And usually, there is not much you can do about it! On the graph below, the noise is changing, from no noise at all to extremely noisy, with the least squares regression in blue (and a confidence interval on the prediction).
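A quick back-of-the-envelope computation makes the dependence explicit: with Y = 2 + 5X + ε and X uniform on (0,1), the explained variance is Var(5X) = 25/12, so the population R-squared is (25/12)/(25/12 + σ²). A small sketch (the function name is mine):

```r
# Population R-squared for Y = 2 + 5*X + e, with X ~ U(0,1) and noise
# standard deviation s: explained variance Var(5X) = 25/12
r2_theoretical <- function(s) (25/12) / (25/12 + s^2)
r2_theoretical(.5)  # about 0.89, close to the 0.8778 above
r2_theoretical(4)   # about 0.12 (the sample value, 0.01, was even lower)
```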
If we compare with the graph below, one can observe that the quality of the fit also depends on the sample size, now with 100 observations (instead of 20),
> S=seq(0,4,by=.2)
> R2=rep(NA,length(S))
> for(s in 1:length(S)){
+ Y=2+5*X+E*S[s]
+ base=data.frame(X,Y)
+ reg=lm(Y~X,data=base)
+ R2[s]=summary(reg)$r.squared}
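The plotting code behind the graph is not included in the post; a self-contained sketch of the same experiment (the styling choices are my own):

```r
# R-squared as a function of the noise level: regenerate the data,
# store one R-squared per noise standard deviation, and plot them
set.seed(1)
n <- 20
X <- runif(n); E <- rnorm(n)
S <- seq(0, 4, by = .2)
R2 <- sapply(S, function(s) {
  Y <- 2 + 5*X + E*s
  summary(lm(Y ~ X))$r.squared
})
plot(S, R2, type = "b",
     xlab = "standard deviation of the noise", ylab = "R-squared")
```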
Nevertheless, it looks like some econometricians really care about the R-squared, and cannot imagine looking at a model if the R-squared is lower than, say, 0.4. It is always possible to reach that level: you just have to add more covariates! If you have some. And if you don't, it is always possible to use polynomials of a continuous variate. For instance, on the previous example,
> S=seq(1,25,by=1)
> R2=rep(NA,length(S))
> for(s in 1:length(S)){
+ reg=lm(Y~poly(X,degree=s),data=base)
+ R2[s]=summary(reg)$r.squared}
If we plot the R-squared as a function of the degree of the polynomial regression, we get the following graph. Once again, the higher the degree, the more covariates, and the more covariates, the higher the R-squared.
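This is also where the adjusted R-squared mentioned earlier earns its keep, since it penalizes each extra covariate. A sketch on simulated data of my own (not the exact simulation behind the graph):

```r
# Plain vs adjusted R-squared as the polynomial degree grows: the former
# can only increase with nested models, the latter pays for each covariate
set.seed(1)
n <- 100
X <- runif(n); Y <- 2 + 5*X + rnorm(n)*4
base <- data.frame(X, Y)
deg <- 1:15
fits <- lapply(deg, function(s)
  summary(lm(Y ~ poly(X, degree = s), data = base)))
R2  <- sapply(fits, function(f) f$r.squared)
aR2 <- sapply(fits, function(f) f$adj.r.squared)
cbind(degree = deg, R2 = round(R2, 3), adjusted = round(aR2, 3))
```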
Published at DZone with permission of Arthur Charpentier, DZone MVB. See the original article here.