EM Algorithm Essentials: Estimating standard errors using the empirical information matrix

Introduction

At the end of my latest blog post, I promised that I would talk about how to perform constrained maximization using unconstrained optimizers. This can be accomplished by employing clever transformations and the nice properties of maximum likelihood estimators – see this fantastic post from the now defunct 🙁 Econometrics Beat. Eventually I will get around to discussing more of the statistical details behind this approach and its implementation from scratch in R.

RIGHT NOW, I want to talk about how to obtain standard errors for Gaussian mixture model parameters estimated using the EM algorithm. So far, I’ve written two posts with respect to Gaussian mixtures and the EM algorithm:

  1. Embracing the EM algorithm: One continuous response, which motivates the theory behind the EM algorithm using a two component Gaussian mixture model; and
  2. EM Algorithm Essentials: Maximizing objective functions using R’s optim, which demonstrates how to implement log-likelihood maximization from scratch using the optim function.

Both posts so far have focused only on POINT ESTIMATION! That is, obtaining estimates of our mixture model parameters.

Any estimate we obtain using any statistical method has some uncertainty associated with it. We quantify the uncertainty of a parameter estimate by its STANDARD ERROR. If we repeated the same experiment with comparable samples a large number of times, the standard error would reflect how much our estimates differ from experiment to experiment. Indeed, we would estimate the standard error as the standard deviation of the effect estimates across experiments. If we can understand how our estimates vary across experiments, i.e., estimate their standard errors, we can perform statistical inference by testing hypotheses or constructing confidence intervals, for example.
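
If it helps to see that idea in action, here is a tiny simulation sketch (using the humble sample mean as a stand-in for a generic estimator; nothing mixture-specific here) of the "standard deviation across repeated experiments" view of a standard error.

# Minimal sketch: the "repeated experiments" view of a standard error, using the
# sample mean of a normal response as a stand-in for a generic estimator.
set.seed(3)
estimates <- replicate(5000, mean(rnorm(50, mean = 10, sd = 2)))
sd(estimates)   # empirical standard error across simulated experiments
2 / sqrt(50)    # theoretical standard error of the mean, for comparison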

We usually do not need to conduct all of these experiments because theory tells us what form this distribution of effect estimates takes! We refer to this distribution as the SAMPLING DISTRIBUTION. When the form of the sampling distribution is not obvious, we may use resampling techniques to estimate standard errors. The EM algorithm, however, attempts to obtain maximum likelihood estimates (MLEs), which have well-known theoretical sampling distributions. For well-behaved densities, MLEs can be shown to be asymptotically unbiased and normally distributed, with a variance-covariance matrix equal to the inverse of the Fisher information matrix, i.e., a function of the second derivatives of the log-likelihood or, equivalently, the outer product of the first derivatives…
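
To state that last bit a little more precisely (the usual asymptotic result, under standard regularity conditions): if \hat{\theta} is the MLE of the parameter vector \theta based on the observed-data log-likelihood \ell(\theta), then in large samples, approximately,

    \[\hat{\theta} \sim N\left(\theta, I(\theta)^{-1}\right), \qquad I(\theta) = -E\left[\frac{\partial^2 \ell(\theta)}{\partial \theta \, \partial \theta^\top}\right] = E\left[\frac{\partial \ell(\theta)}{\partial \theta} \frac{\partial \ell(\theta)}{\partial \theta^\top}\right].\]

In practice, we evaluate an estimate of I(\theta) at \hat{\theta} and take the square roots of the diagonal elements of its inverse as standard errors.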

The tricky bit with the EM algorithm is that we don't maximize the observed log-likelihood directly to estimate Gaussian mixture parameters. Instead, we maximize a more convenient objective function. People much smarter than I have figured out that this gives you the same answer. A question remains: if standard errors are estimated using the second derivatives of the log-likelihood, but we maximized the objective function instead, how do we properly quantify uncertainty? This is particularly tricky because the derivatives of the observed log-likelihood are not straightforward to work with, for the same reasons that make direct maximization difficult.

Isaac Meilijson, another person smarter than me, provides us with a wealth of guidance in his 1989 paper titled A Fast Improvement to the EM Algorithm on its Own Terms (I love this title). In this blog post, we summarize and demonstrate how to construct simple estimators of the observed information matrix, and subsequently the standard errors of our EM estimates, when we have independent and identically distributed responses arising from a two-component Gaussian mixture model. Essentially, due to some nice properties, we can simply use the derivatives of our objective function, in lieu of our log-likelihood, to estimate the observed information. Standard errors computed using this empirical estimate of the information are then compared to those obtained by numerically differentiating the observed log-likelihood.
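
To preview the punchline (stated loosely, for independent and identically distributed responses): if s_i(\theta) denotes the score of the i-th observation, i.e., the gradient of its contribution to the observed log-likelihood, then the empirical information matrix evaluated at the EM solution \hat{\theta} is essentially

    \[I_e(\hat{\theta}) = \sum_{i=1}^{N} s_i(\hat{\theta})\, s_i(\hat{\theta})^\top, \qquad \widehat{SE}(\hat{\theta}_j) = \sqrt{\left[I_e(\hat{\theta})^{-1}\right]_{jj}},\]

and Fisher's identity is what lets us compute each s_i(\hat{\theta}) from the derivatives of the EM objective function rather than the observed log-likelihood itself.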

Continue reading EM Algorithm Essentials: Estimating standard errors using the empirical information matrix

EM Algorithm Essentials: Maximizing objective functions using R’s optim

Overview

In my previous blog post, I motivated the EM algorithm in the context of estimating the parameters of a two-component Gaussian mixture density. In this case, we can write the estimators of the mixing probability, means, and variances in a nice closed form, and I demonstrated how to implement the corresponding iterative estimation procedure from scratch. Results were then compared to those obtained from the very nice R package flexmix.

However, rarely do we get such nice closed form estimators! We usually need to use numerical methods to maximize our objective function directly. In this blog post, I demonstrate how we can specify our objective function, and use the optim function in R to obtain our parameter estimates. optim has lots of options, and we will cover how to change the optimization procedure and implement restrictions on our parameter spaces.
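
As a taste of where the post ends up, here is a minimal sketch (toy data, starting values, and a reparameterization of my own choosing – not the post's exact code) of handing the two-component mixture log-likelihood to optim:

# Minimal sketch, not the post's code: maximize the observed log-likelihood of a
# two-component Gaussian mixture with optim(), using toy data and an unconstrained
# reparameterization (logit mixing probability, log standard deviations).
set.seed(1)
y <- c(rnorm(150, mean = 0, sd = 1), rnorm(100, mean = 4, sd = 1.5))  # toy responses

negloglik <- function(par, y) {
  p     <- plogis(par[1])                 # mixing probability in (0, 1)
  mu    <- par[2:3]
  sigma <- exp(par[4:5])                  # standard deviations > 0
  f     <- p * dnorm(y, mu[1], sigma[1]) + (1 - p) * dnorm(y, mu[2], sigma[2])
  -sum(log(f))                            # optim minimizes, so return the negative
}

fit <- optim(par = c(0, -1, 5, 0, 0), fn = negloglik, y = y,
             method = "BFGS", hessian = TRUE)

c(pi = plogis(fit$par[1]), mu = fit$par[2:3], sigma = exp(fit$par[4:5]))

Setting hessian = TRUE also asks optim for a numerically differentiated Hessian at the solution, which is one route to the standard errors discussed in the newer post above.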

EM for two-component Gaussian mixture

Let’s quickly recap our motivation, previously discussed in Embracing the EM algorithm: One continuous response.

We randomly sample N patients from a population, and examine the empirical density of their responses. We notice two modes, and based on prior knowledge, hypothesize that the density is actually a mixture of two Gaussian densities. For example, the density centered around greater responses may correspond to “healthy” patients and the other to “ill” patients. We would like to (1) estimate the probability of belonging to each subpopulation; and (2) estimate subpopulation parameters, e.g., the mean response among the healthy and the ill. But, we have a problem: we don’t know who belongs to each subpopulation. In other words, subpopulation labels are unobserved or “latent.”

Figure: Observed two-component Gaussian mixture density (purple), and distribution of responses among latent healthy (blue) and ill (red) patient subpopulations.

We can represent the density of the observed responses as a mixture of the subpopulation densities:

    \[f(y) = \pi~ f_1(y) + (1-\pi)~ f_2(y).\]

That is, individuals belong to the first subpopulation, with responses distributed according to density f_1(y), with probability \pi, and to the second, with responses distributed per f_2(y), with probability 1-\pi. We assume both densities are Gaussian with respective means \mu_1 and \mu_2 and variances \sigma_1^2 and \sigma_2^2.
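
To make the latent-label story concrete, here is a tiny simulation sketch (parameter values are made up for illustration) that generates responses exactly this way: draw the unobserved label first, then the response from the corresponding Gaussian.

# Minimal sketch with made-up parameter values: simulate from the mixture by
# drawing the latent subpopulation label first, then the response given the label.
set.seed(2)
N   <- 500
pi1 <- 0.6                                 # P(first subpopulation), e.g., "healthy"
z   <- rbinom(N, size = 1, prob = pi1)     # latent label (unobserved in practice)
y   <- rnorm(N, mean = ifelse(z == 1, 10, 4), sd = ifelse(z == 1, 2, 1))
plot(density(y), main = "Toy two-component Gaussian mixture")  # typically bimodal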

Continue reading EM Algorithm Essentials: Maximizing objective functions using R’s optim

Embracing the EM algorithm: One continuous response

Overview

I’m currently working on a project that revolves around the EM algorithm, and am finally realizing the power of this machinery. It really is like that movie with Jim Carrey where he can’t stop seeing the number 23 everywhere, except for me it’s the EM algorithm. Apparently this is called THE BAADER-MEINHOF PHENOMENON, oooh that’s fancy. You’ve probably seen the EM algorithm around too – though perhaps you didn’t know it. It’s commonly used for estimation with missing data. A modified EM algorithm (EMis) is used by the Amelia library in R. The EM algorithm also underpins latent variable models, which makes sense because latent variables are really missing observations when you think about it, right?! The more I learn about statistics, the more I realize most things are really missing data problems… cough potential outcomes cough

Anyways, I was previously taught the EM algorithm using the classic multinomial example. This is a great teaching tool, but I’ve never run into a situation like this in my life (yet). But, I do run into mixture distributions a surprising amount – mostly when investigating heterogeneity within patient populations. There’s a whole textbook on this, see: Medical Applications of Finite Mixture Models. The EM algorithm makes a lot more sense to me in the context of mixture models:

  • We sample a group of patients and observe their response.
  • We notice a bimodal structure in the response distribution.
  • We hypothesize the observed distribution actually corresponds to two subpopulations or “classes.”
  • We don’t know who belongs to which subpopulation.
  • We estimate the probability of latent class membership using the EM algorithm.

Wouldn’t ya know it, this is unsupervised clustering.

In this blog post, I motivate the EM algorithm in the context of a two-component Gaussian mixture model. A thorough walkthrough of the underlying theory is provided. In this case, estimators take a nice closed form, but this is rarely the case for complex problems encountered in practice. R code for implementing the EM algorithm using the closed-form estimators is provided. I also demonstrate how this model can be easily fit using the flexmix library.
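
For a flavour of what those closed-form estimators look like in code, here is a minimal sketch of a single EM iteration for the two-component model (my own stripped-down version, not the post's full implementation): the E-step computes each observation's posterior probability of belonging to the first component, and the M-step plugs those weights into weighted means, variances, and a mixing proportion.

# Minimal sketch, not the post's full code: one E-step/M-step pass for a
# two-component Gaussian mixture, given data y and current parameter values.
em_step <- function(y, pi1, mu1, mu2, sigma1, sigma2) {
  # E-step: responsibility of the first component for each observation
  d1 <- pi1 * dnorm(y, mu1, sigma1)
  d2 <- (1 - pi1) * dnorm(y, mu2, sigma2)
  g  <- d1 / (d1 + d2)

  # M-step: closed-form weighted updates
  pi1_new    <- mean(g)
  mu1_new    <- sum(g * y) / sum(g)
  mu2_new    <- sum((1 - g) * y) / sum(1 - g)
  sigma1_new <- sqrt(sum(g * (y - mu1_new)^2) / sum(g))
  sigma2_new <- sqrt(sum((1 - g) * (y - mu2_new)^2) / sum(1 - g))

  list(pi1 = pi1_new, mu1 = mu1_new, mu2 = mu2_new,
       sigma1 = sigma1_new, sigma2 = sigma2_new)
}
# Iterate em_step() from reasonable starting values until the estimates
# (or the observed log-likelihood) stop changing.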

Figure: A two-component Gaussian mixture density.

Continue reading Embracing the EM algorithm: One continuous response

Analyze and visualize DAGs in R by integrating the dagR, dagitty, and ggdag libraries

Huzzah!!

This blog post makes 3/3 for the year. I made it, folks! I wasn’t sure I would. It was a jam-packed year for us: I turned 30 and finished my Ph.D. at Western (not causal – well, maybe), we sold our first house and moved from Canada to the USA, and I promptly started my postdoc at Harvard. But it was a good year for us, too. We are having fun exploring Boston, I’m enjoying my postdoc work and co-workers, and I can now force my family to address all my Christmas presents to Dr. Emma – worth it. Very much looking forward to what the New Year brings.



Here’s looking at you, New Year! (Left: Dr. Emma, Right: Just Ethan)

D’ya like DAGs?

If so, good news! This blogpost is all about DAGs. Directed acyclic graphs, that is. One of the most popular posts on my site is Using a DAG to simulate data with the dagR library. I think this is really interesting. Generation of data from randomized controlled trials is relatively straightforward, but I imagine this is not the case for observational studies, confounding and all. DAGs represent a way to formalize our assumptions around causal mechanisms, i.e., X causes Y, and this relationship is confounded by Z, for example. As the adoption of causal inference grows (look at any statistical conference programme), it makes sense that we would want to assess novel methods using data generated according to realistic mechanisms. Simulation according to DAGs allows us to do this, or at least attempt to.

There are two things I want to comment on regarding my previous blogpost. First, I’d really like to see some more sophisticated ways of generating data according to DAGs. The dagR library is limited to continuous and binary exposures, which may be good enough for now but won’t be enough in the long run (survival endpoints are very common in medicine, so too are ordinal exposures or outcomes, etc). I find that the way direct effects are specified and defined also makes it challenging to know the “ground truth” in some cases. I’d like to see the packages do more heavy lifting.

I’m starting to see more materials around DAG simulation, like the DagSim framework for Python or papers like Illustrating How to Simulate Data From Directed Acyclic Graphs to Understand Epidemiologic Concepts and DagSim: Combining DAG-based model structure with unconstrained data types and relations for flexible, transparent, and modularized data simulation. There might be some useful nuggets in here already, and movement in this area suggests we should see something promising soon. Also, thank you to the people working on this!

Second, the default diagrams produced by dagR aren’t bad, but they aren’t great. Sorry (see also: thank you above). While writing this blog post, I noticed that an article on the use of dagR was published by the package authors in 2022. Funnily enough, they seem to recognize it too – suggesting that the ggdag library in R can be used to improve the graphical layout of DAGs. Hey, wouldn’t you know it – that’s exactly what this blog post is about!! A great feature of the dagR library is that you can convert a dagR object to a dagitty object with a single command. So, let’s make your already programmed DAG work for you!

TL;DR

This blog post is a quick tutorial on how to transform dagR objects into dagitty objects so you can construct publication-worthy visualizations of your DAG using the ggdag library.

library(dagR)
library(dagitty)
library(ggdag)
library(tidyverse)
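
With those libraries loaded, here is a minimal sketch of the end product, using a small DAG written directly in dagitty syntax as a stand-in for a converted dagR object (the variable names are made up for illustration):

# Minimal sketch: a hand-written dagitty DAG standing in for a converted dagR
# object, plotted with ggdag.
dag <- dagitty("dag {
  Z -> X
  Z -> Y
  X -> Y
}")

ggdag(dag) +      # quick plot of nodes and edges with an automatic layout
  theme_dag()     # drop axes and gridlines for a cleaner DAG look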

Continue reading Analyze and visualize DAGs in R by integrating the dagR, dagitty, and ggdag libraries