EM Algorithm Essentials: Estimating standard errors using the empirical information matrix

Introduction

At the end of my latest blog post, I promised that I would talk about how to perform constrained maximization using unconstrained optimizers. This can be accomplished by employing clever transformations and the nice properties of maximum likelihood estimators – see this fantastic post from the now defunct 🙁 Econometrics Beat. Eventually I will get around to discussing more of the statistical details behind this approach and its implementation from scratch in R.

RIGHT NOW, I want to talk about how to obtain standard errors for Gaussian mixture model parameters estimated using the EM algorithm. So far, I’ve written two posts with respect to Gaussian mixtures and the EM algorithm:

  1. Embracing the EM algorithm: One continuous response, which motivates the theory behind the EM algorithm using a two component Gaussian mixture model; and
  2. EM Algorithm Essentials: Maximizing objective functions using R’s optim, which demonstrates how to implement log-likelihood maximization from scratch using the optim function.

Both posts so far have focused only on POINT ESTIMATION! That is, obtaining estimates of our mixture model parameters.

Any estimate we obtain using any statistical method has some uncertainty associated with it. We quantify the uncertainty of a parameter estimate by its STANDARD ERROR. If we repeated the same experiment with comparable samples a large number of times, the standard error would reflect how much our estimates differ from experiment to experiment. Indeed, we would estimate the standard error as the standard deviation of the effect estimates across experiments. If we can understand how our estimates vary across experiments, i.e., estimate their standard errors, we can perform statistical inference by testing hypotheses or constructing confidence intervals, for example.
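To make that idea concrete, here is a toy illustration (using the sample mean rather than our mixture model) in which the standard error is estimated as the standard deviation of the estimates across many simulated experiments:

```r
# Toy illustration: estimate the standard error of a sample mean by
# repeating the "experiment" many times and taking the standard
# deviation of the estimates across experiments.
set.seed(1)
n_reps <- 5000  # number of replicate experiments
n      <- 50    # observations per experiment

estimates <- replicate(n_reps, mean(rnorm(n, mean = 0, sd = 2)))

sd(estimates)   # simulated standard error
2 / sqrt(n)     # theoretical standard error of the mean, sigma / sqrt(n)
```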

We usually do not need to conduct all of these experiments because theory tells us what form this distribution of effect estimates takes! We refer to this distribution as the SAMPLING DISTRIBUTION. When the form of the sampling distribution is not obvious, we may use resampling techniques to estimate standard errors. The EM algorithm, however, attempts to obtain maximum likelihood estimates (MLEs), which have theoretical sampling distributions. For well-behaved densities, MLEs can be shown to be asymptotically unbiased and normally distributed, with a variance-covariance matrix equal to the inverse of the Fisher information matrix, i.e., the negative expected value of the second derivatives of the log-likelihood or, equivalently, the expected outer product of its first derivatives.
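In symbols, for a log-likelihood $\ell(\theta)$ satisfying the usual regularity conditions, these two expressions of the Fisher information agree and the MLE is asymptotically normal:

$$
\mathcal{I}(\theta) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^2 \ell(\theta)}{\partial \theta \, \partial \theta^\top}\right]
\;=\; \mathbb{E}\!\left[\frac{\partial \ell(\theta)}{\partial \theta}\,\frac{\partial \ell(\theta)}{\partial \theta}^\top\right],
\qquad
\hat{\theta} \;\overset{\cdot}{\sim}\; \mathcal{N}\!\left(\theta,\; \mathcal{I}(\theta)^{-1}\right).
$$

In practice we plug in the estimate $\hat{\theta}$ and use an observed or empirical version of the information in place of the expectation.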

The tricky bit with the EM algorithm is that we don't maximize the observed log-likelihood directly to estimate Gaussian mixture parameters. Instead, we maximize a more convenient objective function. People much smarter than I have figured out that this yields the same answer. A question remains: if standard errors are estimated using the second derivatives of the log-likelihood, but we maximized the objective function instead, how do we properly quantify uncertainty? This is particularly tricky because the derivatives of the observed log-likelihood are not straightforward to work with, for the same reasons that make its direct optimization difficult.
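To see why, note that for a two-component Gaussian mixture the observed log-likelihood traps the component sum inside the logarithm (writing $\pi$ for the mixing proportion here, which may differ from the notation of the earlier posts):

$$
\ell(\boldsymbol{\theta}) \;=\; \sum_{i=1}^{n} \log\!\left\{ \pi \, \phi\!\left(y_i \mid \mu_1, \sigma_1^2\right) \;+\; (1 - \pi)\, \phi\!\left(y_i \mid \mu_2, \sigma_2^2\right) \right\},
$$

where $\phi(\cdot \mid \mu, \sigma^2)$ is the normal density, so differentiating with respect to any parameter produces ratios involving both component densities rather than terms that separate neatly by component.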

Isaac Meilijson, another person smarter than me, provides us with a wealth of guidance in his 1989 paper titled A Fast Improvement to the EM Algorithm on its Own Terms (I love this title). In this blog post, we summarize and demonstrate how to construct simple estimators of the observed information matrix, and subsequently the standard errors of our EM estimates, when we have independent and identically distributed responses arising from a two-component Gaussian mixture model. Essentially, due to some nice properties, we can simply use the derivatives of our objective function, in lieu of our log-likelihood, to estimate the observed information. Standard errors computed using the observed information are then compared to those obtained using numerical differentiation of the observed log-likelihood.
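To preview the punchline with a minimal sketch: suppose `score_i(theta, yi)` returns the per-observation score vector, obtained (thanks to the property above) from the derivatives of the objective function rather than the log-likelihood. Then the empirical information matrix is simply the sum of outer products of these scores at the EM estimates, and standard errors come from the square root of the diagonal of its inverse. The function names and arguments below are placeholders, not code from the earlier posts:

```r
# Sketch: empirical information matrix from per-observation scores.
# `score_i(theta, yi)` is assumed to return the score vector (length p)
# for a single observation yi at parameter vector theta -- a placeholder
# for the derivatives worked out later in the post.
empirical_info <- function(theta, y, score_i) {
  scores <- t(sapply(y, function(yi) score_i(theta, yi)))  # n x p matrix of scores
  crossprod(scores)                                        # sum of outer products
}

# Standard errors at the EM estimates theta_hat:
# I_hat  <- empirical_info(theta_hat, y, score_i)
# se_hat <- sqrt(diag(solve(I_hat)))
```

Since the total score is approximately zero at the EM solution, centring the individual scores before taking outer products makes little numerical difference, so the plain sum of outer products is used in this sketch.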

Continue reading EM Algorithm Essentials: Estimating standard errors using the empirical information matrix

Pride and Probability

To celebrate pride month, my husband Ethan's workplace, Desire2Learn, organized a virtual Drag Queen BINGO hosted by the fabulous Astala Vista. Even within the confines of a Zoom meeting, Astala Vista put on a great show!

Astala Vista, previously a self-proclaimed cat *lady* of drag, now a cat *cougar* at 30, demonstrating her “roar” on Zoom!

To keep things interesting (and, I'm sure, to reduce the odds of winning so the show could keep going), different BINGO patterns besides the traditional “5 across” BINGO were used. These included a “4 corners” BINGO and a “cover-all” BINGO. To obtain a cover-all BINGO, all numbers on a traditional 5 by 5 BINGO card must be called (noting that of the 25 spaces on the card, 1 is a free space).

Up until this point, probability had not entered the discussion between my husband and me. However, with the cover-all BINGO, Ethan began wondering how many draws it would take to call a cover-all BINGO.

I became quiet, and my husband thought he had perhaps annoyed me with all of his probability questions. In fact, I was thinking about how I could easily simulate the answer to his question (and the corresponding combinatorics answer)!

First, we need to randomly generate a BINGO card. A BINGO card features five columns of five spaces each. The exception is the N column, whose middle space is a FREE space given to all players, leaving it only four numbers. The B column features the numbers 1 through 15, the I column 16 through 30, and so on up to the O column with 61 through 75. The numbers in each column are drawn without replacement for each card, as sketched below.
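Here is a minimal sketch of both pieces in R, assuming the standard BINGO column ranges; the function names (`generate_card`, `draws_to_cover_all`) are my own and not from the full post:

```r
# Generate the 24 numbers on a random BINGO card (the free space needs
# no number). Columns B, I, N, G, O draw from 1-15, 16-30, 31-45,
# 46-60, 61-75 respectively; the N column only needs 4 numbers.
generate_card <- function() {
  ranges <- list(B = 1:15, I = 16:30, N = 31:45, G = 46:60, O = 61:75)
  sizes  <- c(5, 5, 4, 5, 5)  # N column loses one spot to the free space
  unlist(mapply(sample, ranges, sizes, SIMPLIFY = FALSE))
}

# Number of draws (out of 75 balls) needed to cover every number on the card
draws_to_cover_all <- function(card) {
  draw_order <- sample(1:75, 75)   # the full calling order
  max(match(card, draw_order))     # position of the last card number called
}

set.seed(75)
sims <- replicate(10000, draws_to_cover_all(generate_card()))
mean(sims)  # average number of calls needed for a cover-all
```

As a sanity check, a standard order-statistics argument gives the expected position of the last of the card's 24 numbers in a random ordering of 75 balls as 24 × 76 / 25 ≈ 73, so the simulated average should land close to that.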

Continue reading Pride and Probability