Alleviating Uncertainty In Bayesian Inference With MCMC Sampling and Metropolis-Hastings
Statistically speaking, most of us can't speak authoritatively about statistics. But Bayesian inference is an important technique these days in the domain of belief systems. It couldn't hurt to familiarize yourself a bit with its inner workings.
Bayesian inference is a statistical method used to update a prior belief based on new evidence, an extremely useful technique with innumerable applications. Uncertainty about probabilities that are hard to quantify is one of the challenges of Bayesian inference, but there is a solution that is exciting for its cross-disciplinary origins and the elegant chain of ideas of which it is composed.
If you’re a statistician or a data scientist, you probably know the basics of Bayesian inference. If you’re just learning, the mathematical idea is to update the probability of some set of parameters based on measured data. The so-called posterior belief – the updated probability – can be defined as the likelihood of observing the new data multiplied by the prior belief and divided by the overall probability of observing the data, where the prior belief is marginalized out. This gets written as the following formula, where θ is the prior belief and data is the new evidence:

P(θ | data) = P(data | θ) × P(θ) / P(data)
To make it concrete, imagine a kid who gets dessert after dinner 4 out of 7 days per week. That's P(θ), the prior belief. Of the nights she gets dessert, she ate her entire dinner 2 of 4 nights. That's P(data | θ), the new evidence. But she eats her entire dinner 3 of 7 nights per week. That's P(data), the new evidence with the prior belief marginalized out. So, if we execute the formula, the posterior belief is (4/7 × 2/4) / (3/7) = 2/3: if the little girl finishes her dinner, she has a 2/3 likelihood of getting dessert. (The injustice!)
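That arithmetic can be checked in a few lines of Python (the variable names here are ours, not from the formula above):

```python
from fractions import Fraction

# Prior: P(theta) -- dessert on 4 of 7 nights
p_theta = Fraction(4, 7)
# Likelihood: P(data | theta) -- finished dinner on 2 of the 4 dessert nights
p_data_given_theta = Fraction(2, 4)
# Evidence: P(data) -- finishes dinner on 3 of 7 nights overall
p_data = Fraction(3, 7)

# Bayes' rule: P(theta | data) = P(data | theta) * P(theta) / P(data)
posterior = p_data_given_theta * p_theta / p_data
print(posterior)  # 2/3
```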
One thing that makes Bayesian inference hard is that, more often than not, the integral in the denominator, also known as the normalizing constant, doesn't have a closed analytic form. It has to be numerically approximated, and that's hard to do: you're estimating the overall probability of observing the data, marginalized over every possible parameter value.
This is where techniques developed in very different fields come to the rescue: Markov chains, Monte Carlo methods, and the Metropolis-Hastings algorithm.
Markov chains are a very popular device for modeling stochastic – random – processes in numerous fields. The basic idea is that you can model a random process using a transition matrix, where the process transitions from one state to another in some state space over time. The transition matrix lays out the probabilities for these state transitions, with the transitions being memoryless, i.e., the probability of transitioning from one state to another depends only on the state the process is transitioning from (and not the sequence of states the process took to get to that state). The impact of this is that as the chain evolves over a long enough period of transitions, irrespective of the initial state the process started from, the distribution over the underlying states settles into a stationary distribution. In its simplest form this is represented by the following formula, where π is the stationary distribution and τ is the transition matrix of probabilities:

π = π τ
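Here is a minimal sketch of that convergence in Python; the two-state transition matrix is an invented example, but the behavior – two different starting distributions settling into the same stationary one – is the general phenomenon:

```python
import numpy as np

# Invented two-state transition matrix tau: each row sums to 1
tau = np.array([[0.9, 0.1],
                [0.5, 0.5]])

# Start from two completely different initial distributions
pi_a = np.array([1.0, 0.0])
pi_b = np.array([0.0, 1.0])
for _ in range(1000):
    pi_a = pi_a @ tau
    pi_b = pi_b @ tau

# Both settle into the same stationary distribution satisfying pi = pi @ tau
print(pi_a)  # approximately [0.8333, 0.1667]
```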
Monte Carlo methods are widely applied in various fields as well. These methods allow one to approximate the expectation of a function of a random variable using the sample average of the function evaluated at samples drawn from the underlying probability distribution. Here ν is the number of samples drawn from the probability distribution ρ:

E_ρ[f(X)] ≈ (1/ν) Σᵢ f(xᵢ),  i = 1 … ν,  where each xᵢ is drawn from ρ
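As a quick illustration, a Monte Carlo estimate of E[X²] for a standard normal X should land near the true value of 1 (the sample size and seed below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Draw nu = 100,000 samples from rho = the standard normal distribution
samples = rng.standard_normal(100_000)
# Sample average of f(x) = x**2 approximates E[X^2] = 1
estimate = np.mean(samples ** 2)
print(estimate)  # close to 1.0
```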
It turns out, under some very general assumptions, that for every probability distribution there exists a Markov chain with that distribution as its stationary distribution. So what if we were to assume that our hard-to-compute posterior belief (which is a conditional probability distribution) is the stationary distribution of some (unknown) Markov chain? If we could invert a Markov chain, i.e., yield samples from the chain given its stationary distribution, and assume the stationary distribution is our posterior, then we have a “magical” way of sampling the posterior without having a closed form for it! The Metropolis-Hastings algorithm allows us to do just that. It implicitly constructs the transition matrix from the stationary-distribution equation above given only the easy-to-compute numerator of Bayes' rule – the likelihood times the prior. The normalizing constant is not needed. As the algorithm iterates, its iterations yield samples from our posterior.
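A bare-bones Metropolis sampler makes this concrete. In the sketch below, the symmetric random-walk proposal, the step size, and the target density are all illustrative choices; the key point is that only an unnormalized log-density is ever evaluated:

```python
import numpy as np

def metropolis(log_unnorm, n_samples, x0=0.0, step=1.0, seed=0):
    """Sample from a density known only up to its normalizing constant.

    log_unnorm: log of the unnormalized target, e.g. log-likelihood plus
    log-prior -- just the numerator of Bayes' rule.
    """
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()  # symmetric random walk
        # Accept with probability min(1, p(proposal) / p(x)).
        # The unknown normalizing constant cancels out of this ratio.
        if np.log(rng.random()) < log_unnorm(proposal) - log_unnorm(x):
            x = proposal
        samples[i] = x
    return samples

# Illustrative target: a normal with mean 3 and standard deviation 1,
# with its normalizing constant deliberately dropped
samples = metropolis(lambda x: -0.5 * (x - 3.0) ** 2, 50_000)
print(samples[10_000:].mean())  # close to 3 after discarding burn-in
```

Discarding the first chunk of samples ("burn-in") lets the chain forget its arbitrary starting point before we treat its output as draws from the target.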
How does that help? As we sample the posterior, we can easily do either of the following:
- Estimate the posterior probability, e.g., by constructing a histogram from these samples. This histogram approximates the density of the underlying probability distribution.
- Compute Bayesian predictions. Use the Monte Carlo rule above to approximate the expected value of a prediction for a new data point over the entire model parameter space by the sample average of the predictions computed at parameter values sampled from the posterior.
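Both uses boil down to a few lines once posterior samples are in hand. In this sketch the samples are drawn directly from a known Beta distribution as a stand-in for Metropolis-Hastings output, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for MCMC output: hypothetical posterior samples of a
# success probability theta, here drawn from a Beta(3, 3) distribution
theta = rng.beta(3.0, 3.0, size=100_000)

# 1. Estimate the posterior density with a normalized histogram
hist, edges = np.histogram(theta, bins=50, density=True)

# 2. A Bayesian prediction: P(success on the next trial) is the posterior
#    expectation of theta, approximated by the sample average
prediction = theta.mean()
print(prediction)  # close to 0.5, the mean of Beta(3, 3)
```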
The Monte Carlo methods were developed in the 1940s at Los Alamos during World War II by a team of physicists working on the Manhattan Project. The Metropolis-Hastings algorithm was subsequently developed in the 1950s at Los Alamos by Nicholas Metropolis, working on many-body problems in statistical mechanics, and later generalized by W. K. Hastings in 1970. With the emergence of computing power in the 1980s, there was a rapid surge in the decade to follow in the application of these algorithms to problems in fields ranging from computational biology to finance and business that had proven intractable with other mechanisms. Decades-old ideas originally developed in one field have served to make contributions in different areas many years later – evolving Bayesian inference via exaptation.
Published at DZone with permission of the author, a DZone MVB.