
Probabilistic Foundations of Econometrics: Part 1

Let's get into the details of the history of econometric and machine learning models.

Arthur Charpentier · Mar. 27, 19 · Tutorial
In a series of posts, I wanted to get into the details of the history and foundations of econometric and machine learning models. It will be a sort of online version of our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), which will appear soon in the journal Economics and Statistics. This is the first one...

The importance of probabilistic models in economics is rooted in Working's (1927) questions and the attempts to answer them in Tinbergen's two volumes (1939). The latter has subsequently generated a great deal of work, as recalled by Duo (1993) in his book on the foundations of econometrics and more particularly in the first chapter "The Probability Foundations of Econometrics."

It should be recalled that Trygve Haavelmo was awarded the Nobel Prize in Economics in 1989 for his "clarification of the foundations of the probabilistic theory of econometrics." As Haavelmo (1944) showed (initiating a profound change in econometric theory, as recalled in Chapter 8 of Morgan (1990)), econometrics is fundamentally based on a probabilistic model, for two main reasons.

First, the use of statistical quantities (or "measures") such as means, standard errors, and correlation coefficients for inferential purposes can only be justified if the process generating the data can be expressed in terms of a probabilistic model. Second, the probability approach is relatively general and particularly well suited to the analysis of "dependent" and "non-homogeneous" observations, as they are often found in economic data.

We will then assume that there is a probability space (Ω, F, P) such that the observations (yi, xi) are seen as realizations of random variables (Yi, Xi). In practice, however, we are not very interested in the joint law of the couple (Y, X): the law of X is unknown, and it is the law of Y conditional on X that we will be interested in. In the following, we will write x for a single observation, x (in bold) for a vector of observations, X for a random variable, and X (in bold) for a random vector. Abusively, X may also designate the matrix of individual observations (with rows xiᵀ), depending on the context.

Foundations of Mathematical Statistics

As recalled in the introduction of Vapnik (1998), inference in parametric statistics is based on the following belief: the statistician knows the problem to be analyzed well; in particular, he knows the physical law that generates the stochastic properties of the data, and the function to be found is written via a finite number of parameters [1]. To find these parameters, the maximum likelihood method is used. The purpose of the theory is to justify this approach (by discovering and describing its favorable properties). We will see that in learning, the philosophy is very different, since we do not have reliable a priori information on the statistical law underlying the problem, nor even on the function we would like to approximate (we will then propose methods to construct an approximation from the data at our disposal, as in Vapnik (1998)). A "golden age" of parametric inference, from 1930 to 1960, laid the foundations of mathematical statistics, which can be found in all statistical textbooks, even today. As Vapnik (1998) states, the classical parametric paradigm is based on the following three beliefs:

  1. To find a functional relationship from the data, the statistician is able to define a set of functions, linear in their parameters, that contains a good approximation of the desired function. The number of parameters describing this set is small.
  2. The statistical law underlying the stochastic component of most real-life problems is the normal law. This belief has been supported by reference to the central limit theorem, which states that under broad conditions the sum of a large number of random variables is approximately normally distributed.
  3. The maximum likelihood method is a good tool for estimating parameters.

In this section, we will come back to the construction of the econometric paradigm, directly inspired by that of classical inferential statistics.

Conditional Laws and Likelihood

Linear econometrics has been constructed under the assumption of individual data, which amounts to assuming independent pairs (Yi, Xi) (one could also imagine temporal observations, giving a process (Yt, Xt), but we will not discuss time series here). More precisely, we will assume that, conditionally on the explanatory variables Xi, the variables Yi are independent. We will also assume that these conditional laws remain in the same parametric family, but that the parameter is a function of x. In the Gaussian linear model, it is assumed that...

(Y | X = x) ∼ N(μ(x), σ²)    (1)

...where μ(x) = β0 + xᵀβ and β ∈ ℝᵖ.
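To fix ideas, here is a minimal sketch in Python (not from the original article; sample size, dimension, and parameter values are illustrative choices) simulating observations from model (1):

```python
import numpy as np

# Simulate from the Gaussian linear model (1):
# (Y | X = x) ~ N(mu(x), sigma^2), with mu(x) = beta0 + x'beta.
# All numerical values are illustrative.
rng = np.random.default_rng(42)
n, p = 200, 3
X = rng.normal(size=(n, p))                     # rows are the observations x_i
beta0, beta, sigma = 1.0, np.array([2.0, -1.0, 0.5]), 0.8
y = beta0 + X @ beta + rng.normal(scale=sigma, size=n)
```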

It is usually called a 'linear' model since E[Y | X = x] = β0 + xᵀβ is a linear combination of the covariates [2]. It is said to be a homoscedastic model if Var[Y | X = x] = σ², where σ² is a positive constant. To estimate the parameters, the traditional approach is to use the maximum likelihood estimator, as initially suggested by Ronald Fisher. In the case of the Gaussian linear model, the log-likelihood is written:

log L(β0, β, σ² | y, X) = −(n/2) log[2πσ²] − (1/(2σ²)) ∑_{i=1}^{n} (yi − β0 − xiᵀβ)²

Note that the term on the right, measuring a distance between the data and the model, will be interpreted as the deviance in generalized linear models. We will then set:

(β̂0, β̂, σ̂²) = argmax{ log L(β0, β, σ² | y, X) }

The maximum likelihood estimator is thus obtained by minimizing the sum of squared errors (the so-called "least squares" estimator), which we will encounter again in the "machine learning" approach.
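One can check this equivalence numerically. A sketch (again illustrative, with the same simulated data as above; the optimizer choice is an assumption, not the article's code): maximizing the Gaussian log-likelihood reproduces the closed-form least-squares solution.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data from model (1), as in the previous sketch
rng = np.random.default_rng(42)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = 1.0 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.8, size=n)

def neg_log_lik(theta):
    """Negative Gaussian log-likelihood; theta = (beta0, beta, log sigma^2)."""
    b0, b, s2 = theta[0], theta[1:-1], np.exp(theta[-1])  # exp keeps sigma^2 > 0
    r = y - b0 - X @ b
    return 0.5 * n * np.log(2 * np.pi * s2) + r @ r / (2 * s2)

theta_hat = minimize(neg_log_lik, np.zeros(p + 2), method="BFGS").x

# Closed-form least squares on the design [1, X]
Z = np.column_stack([np.ones(n), X])
beta_ls = np.linalg.solve(Z.T @ Z, Z.T @ y)
print(np.allclose(theta_hat[:p + 1], beta_ls, atol=1e-4))  # True: same estimates
```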

The first-order conditions yield the normal equations, whose matrix form is Xᵀ[y − Xβ̂] = 0, which can also be written (XᵀX)β̂ = Xᵀy. If X is a matrix of full (column) rank, then we find the classical estimator...

β̂ = (XᵀX)⁻¹Xᵀy = β + (XᵀX)⁻¹Xᵀε   (2)

...using the residual-based writing (as is common in econometrics) y = Xβ + ε. The Gauss-Markov theorem ensures that this estimator is the unbiased linear estimator with minimum variance. It can then be shown that β̂ ∼ N(β, σ²(XᵀX)⁻¹), and in particular, if we simply need the first two moments:

E[β̂] = β   and   Var[β̂] = σ²[XᵀX]⁻¹
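In code, the closed-form estimator (2) and its estimated covariance σ̂²(XᵀX)⁻¹ are a few lines of linear algebra. A minimal sketch (illustrative data; here the design matrix includes the intercept column):

```python
import numpy as np

# Illustrative data: design matrix with an intercept column
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.8, size=n)

# beta_hat = (X'X)^{-1} X'y, solved without forming the inverse explicitly
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Estimated covariance sigma^2 (X'X)^{-1}, with the unbiased variance estimate
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)
std_err = np.sqrt(np.diag(cov_beta))          # standard errors of beta_hat
```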

In fact, the normality hypothesis makes it possible to make a link with mathematical statistics, but it is possible to construct the estimator given by equation (2) without that Gaussian assumption. Hence, if we assume that Y | X has the same distribution as xᵀβ + ε, where E[ε] = 0, Var[ε] = σ² and Cov[Xj, ε] = 0 for all j, then β̂ is an unbiased estimator of β with the smallest variance [3] among unbiased linear estimators. Furthermore, even if normality does not hold in finite samples, this estimator is asymptotically Gaussian, with...

√n (β̂ − β) →ᵈ N(0, Σ)

...as n → ∞, for some matrix Σ (→ᵈ denoting convergence in distribution).

The condition that X has full rank can be (numerically) strong in large dimensions. If it is not satisfied, (XᵀX)⁻¹Xᵀ does not exist. However, if I denotes the identity matrix, (XᵀX + λI)⁻¹Xᵀ always exists, for any λ > 0. This estimator is called the ridge estimator of level λ (introduced in the 1960s by Hoerl (1962), and associated with a regularization studied by Tikhonov (1963)). This estimator also appears naturally in a Bayesian econometric context.
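A quick sketch of this point (the rank-deficient design and the value of λ are illustrative assumptions): with a duplicated column, XᵀX is singular and the ordinary least-squares formula breaks down, but the ridge system remains solvable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1])      # duplicated column: X'X is singular
y = 3.0 * x1 + rng.normal(size=n)

lam = 0.1                          # illustrative regularization level
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
# (X'X)^{-1} does not exist here, but X'X + lam*I is positive definite for
# any lam > 0; by symmetry the weight splits evenly across the two copies.
print(beta_ridge)                  # roughly [1.5, 1.5]
```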

Residuals

It is not uncommon to introduce the linear model through the distribution of the residuals, as we mentioned earlier. Thus, equation (1) is often written as...

yi = β0 + xiᵀβ + εi    (3)

...where the εi's are realizations of independent and identically distributed (i.i.d.) random variables, following some N(0, σ²) distribution. With vector notation, we will write ε ∼ N(0, σ²I). The estimated residuals are defined as:

ε̂i = yi − [β̂0 + xiᵀβ̂]

Those (estimated) residuals are basic tools for diagnosing the relevance of the model.
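For instance, a minimal sketch (illustrative data) computing the estimated residuals. Note that with an intercept in the model, the normal equations force the residuals to be exactly centered and orthogonal to the covariates, so diagnostics look for patterns beyond these built-in constraints (nonlinearity, heteroscedasticity, non-normality):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat           # estimated residuals eps_hat_i

print(resid.mean())                # 0 up to rounding (intercept included)
print(X.T @ resid)                 # ~0: normal equations X'(y - X beta_hat) = 0
```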

An extension of the model described by equation (1) has been proposed to take possible heteroscedasticity into account...

(Y | X = x) ∼ N(μ(x), σ²(x))

...where σ2(x) is a positive function of the explanatory variables. This model can be rewritten as...

yi = β0 + xiᵀβ + σ(xi)·εi

...where residuals are always i.i.d., with unit variance:

εi ∼ N(0, 1)
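A small sketch of this rescaling (the functional form σ(x) = 0.5 + |x| is an illustrative choice, not from the article): dividing the centered response by σ(xi) recovers residuals with unit variance.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
sigma_x = 0.5 + np.abs(x)                      # illustrative sigma(x)
y = 1.0 + 2.0 * x + sigma_x * rng.normal(size=n)

eps = (y - (1.0 + 2.0 * x)) / sigma_x          # rescaled residuals
print(eps.var())                               # close to 1, as expected
```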

While residual-based equations are popular in linear econometrics (when the dependent variable is continuous), they are no longer popular in count models or logistic regression.

However, writing using an error term (as in equation (3)) raises many questions about the representation of an economic relationship between two quantities. For example, it can be assumed that there is a relationship (linear, to begin with) between the quantity q of a traded good and its price p. This allows us to imagine a supply equation...

qi = β0 + β1 pi + ui

...where the quantity sold depends on the price, but in an equally legitimate way, one can imagine that the price depends on the quantity produced (what one could call a demand equation),

pi = α0 + α1 qi + vi

(vi denoting another error term). Historically, the error term in equation (3) could be interpreted as an idiosyncratic error on the variable y, the so-called explanatory variables being assumed to be fixed. However, this interpretation often makes the link between an economic relationship and a complicated economic model difficult: economic theory speaks abstractly about a relationship between magnitudes, while the econometric model imposes a specific form (which magnitude is y and which is x), as shown in more detail in Chapter 7 of Morgan (1990).
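One way to see the ambiguity concretely (a sketch with an illustrative simulated joint law, not an economic model): regressing q on p and p on q does not give mutually inverse lines; the product of the two slopes is the squared correlation, not 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
p = rng.normal(size=n)
q = 0.5 + 2.0 * p + rng.normal(size=n)   # one arbitrary data-generating story

cov_qp = np.cov(q, p, ddof=1)[0, 1]
b1 = cov_qp / np.var(p, ddof=1)          # slope of the regression of q on p
a1 = cov_qp / np.var(q, ddof=1)          # slope of the regression of p on q
print(b1 * a1)                           # = r^2 < 1: the two lines differ
```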

References mentioned above are online here. To be continued...

[1] This approach can be compared to structural econometrics, as presented for example in Keane (2010).

[2] Here, we will try to distinguish β0, the intercept, from the other parameters β, since they are treated differently in many extensions (e.g., regularization). Nevertheless, in many expressions β will denote the joint vector (β0, β) in general formulas, to avoid overly heavy notation.

[3] In the sense that the difference between variance matrices is a positive matrix.


Published at DZone with permission of Arthur Charpentier, DZone MVB. See the original article here.
