Let’s say I toss a coin 10 times and observe 8 heads and 2 tails. Based on this, if I say that the probability of getting a head (let’s represent it by $\theta$) in the 11th toss is 0.8 or 80%, most will agree with me. However, perhaps unknown to many, this is only one of the many possible estimates for $\theta$. The above estimate is based on an approach known as **Maximum Likelihood Estimation (MLE)**.

The intuition behind the MLE approach is to find the $\theta$ that maximizes the probability of getting the above observed dataset ($D$), i.e. we want to maximize $P(D|\theta)$. Since the observations in the dataset are independent and identically distributed (iid), we can write

$$P(D|\theta) = \prod_{i=1}^{N} P(d_i|\theta) = \theta^{N_H}(1-\theta)^{N-N_H}$$

For practical purposes (to avoid numerical underflow when multiplying many small probabilities) and to simplify the math, rather than maximizing the above product we take its logarithm and maximize that instead. Since log is a monotonically increasing function, this doesn’t change the location of the maximum:

$$\log P(D|\theta) = N_H \log\theta + (N - N_H)\log(1-\theta)$$

Above, $N_H$ represents the number of heads and $N$ the total number of tosses. Now, to maximize the above equation, we take the derivative with respect to $\theta$ and equate it to zero. This gives us the maximum likelihood estimate for $\theta$:

$$\frac{d}{d\theta}\log P(D|\theta) = \frac{N_H}{\theta} - \frac{N-N_H}{1-\theta} = 0 \quad\Rightarrow\quad \hat{\theta}_{MLE} = \frac{N_H}{N} = \frac{8}{10} = 0.8$$
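As a quick sanity check (not part of the original derivation), we can evaluate the log-likelihood on a grid of $\theta$ values and confirm that it peaks at $N_H/N$:

```python
import numpy as np

# Evaluate the Bernoulli log-likelihood on a fine grid and confirm that
# its maximizer matches the closed-form MLE N_H / N. N and N_H come from
# the coin-toss example in the text (10 tosses, 8 heads).
N, N_H = 10, 8
thetas = np.linspace(0.001, 0.999, 999)
log_lik = N_H * np.log(thetas) + (N - N_H) * np.log(1 - thetas)
theta_mle = thetas[np.argmax(log_lik)]
print(round(theta_mle, 2))  # → 0.8
```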

Note that the above solution is obtained by maximizing the probability of getting the observed dataset, i.e. $P(D|\theta)$. However, some scholars argue that what we really want to maximize is the probability of $\theta$ given the observed dataset, i.e. $P(\theta|D)$. Using the Bayesian approach this can be expanded as

$$P(\theta|D) = \frac{P(D|\theta)\,P(\theta)}{P(D)} \tag{1}$$

One advantage of the above approach is that it allows us to include prior knowledge. For instance, from past experience we all know that the probability of getting a head or a tail with a fair coin is 0.5. But if a coin is tossed 10 times, rarely do we get exactly 5 heads and 5 tails. That means we have some kind of distribution over $\theta$ with a mean around 0.5 and a small variance. The Bayesian approach allows us to mathematically incorporate this belief. However, the downside of the Bayesian approach is that it quickly becomes a lot more difficult to find a closed-form solution. Furthermore, the challenges differ depending on whether we try to solve the above equation analytically or numerically.

Theoretically, the challenge is computing the probability of the evidence, i.e. $P(D)$. $P(D)$ can be represented as $\int_0^1 P(D|\theta)\,P(\theta)\,d\theta$. However, solving this integration can be intractable unless we assume a **conjugate prior**. This will become clearer after we solve equation 1. For now, let’s represent our prior belief about $\theta$ using a beta distribution:

$$P(\theta) = \text{Beta}(\theta\,|\,\alpha,\beta) = \frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{B(\alpha,\beta)}$$

The two parameters ($\alpha, \beta$) of the beta distribution allow us to model our belief that a fair coin has equal chances of getting a head or a tail.

By representing our prior using the beta distribution, we can rewrite the evidence as

$$P(D) = \int_0^1 \theta^{N_H}(1-\theta)^{N-N_H}\,\frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{B(\alpha,\beta)}\,d\theta$$

Solving the above integration turns out to be simple and results in a constant that can be represented as $\frac{B(N_H+\alpha,\;N-N_H+\beta)}{B(\alpha,\beta)}$. Thus we can rewrite equation 1 as

$$P(\theta|D) = \frac{\theta^{N_H+\alpha-1}(1-\theta)^{N-N_H+\beta-1}}{B(N_H+\alpha,\;N-N_H+\beta)} = \text{Beta}(\theta\,|\,N_H+\alpha,\;N-N_H+\beta) \tag{2}$$
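We can verify the closed-form evidence numerically. The sketch below (my own check, not part of the original derivation) assumes $N=10$, $N_H=8$, $\alpha=\beta=2$ and compares direct numerical integration against the constant $\frac{B(N_H+\alpha,\,N-N_H+\beta)}{B(\alpha,\beta)}$:

```python
import numpy as np
from scipy import special, integrate

# Check the closed-form evidence B(N_H+a, N-N_H+b) / B(a, b) against a
# direct numerical integration of likelihood x prior over theta in [0, 1].
N, N_H, a, b = 10, 8, 2, 2

def integrand(theta):
    prior = theta**(a - 1) * (1 - theta)**(b - 1) / special.beta(a, b)
    likelihood = theta**N_H * (1 - theta)**(N - N_H)
    return likelihood * prior

numeric, _ = integrate.quad(integrand, 0, 1)
closed_form = special.beta(N_H + a, N - N_H + b) / special.beta(a, b)
print(np.isclose(numeric, closed_form))  # → True
```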

Note that the above posterior has the same form as the prior distribution (i.e. a beta distribution) but with different parameters. For the posterior we got $\alpha' = N_H + \alpha$ and $\beta' = N - N_H + \beta$. For a given likelihood distribution (here Bernoulli), if the choice of prior distribution (here beta) results in the same form of distribution for the posterior, then that prior distribution is known as the conjugate prior. For the Bernoulli distribution, the conjugate prior is the beta distribution. For a Gaussian distribution (with known variance), the conjugate prior for the mean is a Gaussian itself. For the beta distribution, the expected value of the parameter $\theta$ is given as

$$E[\theta] = \frac{\alpha}{\alpha+\beta} \tag{3}$$

Based on equations 2 and 3, we can calculate the expected value of $\theta$ as

$$E[\theta|D] = \frac{N_H+\alpha}{N+\alpha+\beta} \tag{4}$$

Let’s assume $\alpha=\beta=2$; then $E[\theta|D] = \frac{8+2}{10+2+2} = \frac{10}{14} \approx 0.71$. Thus, while as per MLE $\hat\theta = 0.8$, as per the Bayesian estimator $E[\theta|D] \approx 0.71$. In the case of Bayesian estimation, there are two different forces pulling in different directions. From equation 4 we notice that the bigger our hyperparameters ($\alpha, \beta$), i.e. the stronger our belief in the prior, the stronger the pull towards the prior belief. On the other hand, the bigger the sample size $N$, the stronger the pull towards the estimate based on the dataset. If $N$ is sufficiently large then the Bayesian estimate and the MLE will converge. Another difference to note is that MLE gives a point estimate (i.e. it returns one value), whereas the Bayesian approach returns a distribution. For instance, above we got a beta distribution with updated alpha and beta parameters.
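The pull between prior and data can be seen numerically. The sketch below (my own illustration, assuming $\alpha=\beta=2$ and a fixed 80% heads rate) shows equation 4 approaching the MLE as $N$ grows:

```python
# As N grows (with the fraction of heads fixed at 0.8), the Bayesian
# estimate (N_H + a) / (N + a + b) approaches the MLE N_H / N.
a, b = 2, 2  # prior hyperparameters
for N in (10, 100, 1000):
    N_H = int(0.8 * N)
    mle = N_H / N
    bayes = (N_H + a) / (N + a + b)
    print(N, round(mle, 3), round(bayes, 3))
```

At $N=10$ the Bayesian estimate is 0.714, while by $N=1000$ it is already 0.799, essentially the MLE.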

From the practical perspective, computing the probability of the evidence is not an issue; it is merely a normalizing constant. From a practical perspective, the expected value of $\theta$ can be represented as

$$E[\theta|D] = \int_0^1 \theta\, P(\theta|D)\, d\theta$$

As per the Monte Carlo method, the above integral can be approximated by drawing a large number of samples from the probability distribution of $\theta$ given $D$ and averaging them, i.e.

$$E[\theta|D] \approx \frac{1}{n}\sum_{i=1}^{n} \theta_i, \qquad \theta_i \sim P(\theta|D)$$
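Because our posterior happens to have a known form, we can check this averaging idea directly. The sketch below (my own check, assuming $N=10$, $N_H=8$, $\alpha=\beta=2$, so the posterior is $\text{Beta}(10, 4)$) samples the posterior with the standard library and averages the draws:

```python
import random

# The posterior here is Beta(N_H + a, N - N_H + b) = Beta(10, 4), whose
# mean is 10/14 ~ 0.714. Averaging direct draws should recover it.
random.seed(0)
samples = [random.betavariate(10, 4) for _ in range(100000)]
estimate = sum(samples) / len(samples)
print(round(estimate, 3))
```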

However, the challenge is how to sample $\theta_i$. Random values of $\theta$ drawn from, say, the prior distribution might *not* be good representatives of the sample space described by $P(\theta|D)$. To overcome this challenge there are many different sampling techniques; broadly, techniques like the one below are referred to as **Markov Chain Monte Carlo (MCMC)** methods. Below is a Python example demonstrating one of the simplest such techniques, known as the **Metropolis Sampling** technique. I leave the broader discussion of sampling methods for some other time, but for now below is the Python code to compute $E[\theta|D]$ for the given problem.

Below I assume that the prior follows a beta distribution with $\alpha=\beta=2$. Thus the integral that we need to solve is

$$E[\theta|D] = \int_0^1 \theta\,\frac{\theta^{8}(1-\theta)^{2}\,\text{Beta}(\theta\,|\,2,2)}{P(D)}\,d\theta$$

Any constant can be dropped when using the Monte Carlo approach, since Metropolis sampling only needs the target distribution up to a normalizing constant. Hence, dropping the constants from $\text{Beta}(2,2)$ and $P(D)$, the function we sample from is proportional to

$$f(\theta) = \theta^{8}(1-\theta)^{2}\cdot\theta(1-\theta) = \theta^{9}(1-\theta)^{3}$$
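To see why dropping constants is harmless, note that the Metropolis acceptance ratio divides the target evaluated at two points, so any constant factor cancels. A small check (my own, using two arbitrary test points):

```python
from scipy import stats

# Dropping the Beta(2,2) normalizing constant leaves the Metropolis
# acceptance ratio unchanged, because the constant cancels in the ratio.
def full(theta):          # likelihood x prior, constants included
    return theta**8 * (1 - theta)**2 * stats.beta.pdf(theta, 2, 2)

def unnormalized(theta):  # constants dropped
    return theta**9 * (1 - theta)**3

t1, t2 = 0.6, 0.7
print(abs(full(t2) / full(t1) - unnormalized(t2) / unnormalized(t1)) < 1e-12)  # → True
```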

```python
import random
import matplotlib.pyplot as plt

def integral(theta):
    """This is likelihood x prior after removing any constant terms."""
    return theta**9 * (1 - theta)**3

n = 100                      # number of points to sample
samples = [random.random()]  # random starting point

for i in range(n):
    # Grab the last sample point
    theta = samples[-1]

    # Propose a new sample by adding noise drawn from a normal distribution
    # with mean = 0 and sd = 0.1. If the proposal falls outside the domain,
    # ignore it and reuse the existing sample.
    newTheta = theta + random.normalvariate(0, 0.1)
    if newTheta < 0 or newTheta > 1:
        newTheta = theta

    # Accept the proposal with probability min(1, acceptanceRatio);
    # comparing the ratio against a uniform draw achieves exactly that,
    # so downhill moves are still accepted occasionally.
    acceptanceRatio = integral(newTheta) / integral(theta)
    if acceptanceRatio > random.random():
        samples.append(newTheta)
    else:
        samples.append(theta)

print("Estimate:", sum(samples) / len(samples))

# Plot how the sampled theta varies with each iteration
ylab = list(range(len(samples)))
plt.plot(samples, ylab)
plt.title('Random Walk Visualization')
plt.xlabel('Theta Value')
plt.ylabel('Time')
plt.show()
```

Running the above code gave a mean of 0.64. The plot shows the sampled $\theta$ at each iteration.

Note that the above code only sampled 100 points. Typically you sample a much larger number of points and also discard a few initial points, known as the **burn-in** period, so as to remove the influence of the starting point from the estimate. If we simply increase the number of samples, the estimate for $\theta$ comes out to about 0.71. This is the same value we got analytically with the Bayesian estimator.
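A minimal sketch of that larger run, with a burn-in period discarded (same unnormalized posterior $\theta^9(1-\theta)^3$ as above; the sample count and burn-in length are my own choices):

```python
import random

# Metropolis sampler with more iterations and a burn-in period.
def target(theta):
    return theta**9 * (1 - theta)**3

random.seed(0)
n, burn_in = 20000, 2000
samples = [random.random()]
for _ in range(n):
    theta = samples[-1]
    proposal = theta + random.normalvariate(0, 0.1)
    if not 0 < proposal < 1:
        proposal = theta  # reject proposals outside the domain
    # Accept with probability min(1, ratio) via a uniform draw
    if target(proposal) / target(theta) > random.random():
        samples.append(proposal)
    else:
        samples.append(theta)

kept = samples[burn_in:]  # discard the burn-in period
estimate = sum(kept) / len(kept)
print(round(estimate, 2))
```

With this many samples the estimate settles near the analytical value of 10/14 ≈ 0.71.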

**References and Notes**

One of the best resources I found on this topic is a series of tutorials by Dr. Avinash Kak. The Python code is based on Hilbert’s blog on Markov Chain Monte Carlo.

Apart from the MLE and Bayesian approaches, another frequently used approach is Maximum A Posteriori (MAP). Similar to MLE, MAP gives a point estimate, but like the Bayesian approach it allows us to incorporate prior beliefs: instead of the expected value of the posterior, MAP picks its mode, $\hat\theta_{MAP} = \arg\max_\theta P(D|\theta)\,P(\theta)$.
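For the beta-Bernoulli model above, the MAP estimate has a simple closed form: the mode of $\text{Beta}(N_H+\alpha,\,N-N_H+\beta)$. A quick sketch (my own, assuming the same $N=10$, $N_H=8$, $\alpha=\beta=2$ as before):

```python
# MAP estimate for the beta-Bernoulli model: the mode of the posterior
# Beta(N_H + a, N - N_H + b), i.e. (N_H + a - 1) / (N + a + b - 2).
N, N_H, a, b = 10, 8, 2, 2
theta_map = (N_H + a - 1) / (N + a + b - 2)
print(theta_map)  # → 0.75
```

Note how it lands between the MLE (0.8) and the posterior mean (0.71).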
