EM Algorithm Essentials: Maximizing objective functions using R’s optim

Overview

In my previous blogpost, I motivated the EM algorithm in the context of estimating the parameters of a two-component Gaussian mixture density. In this case, we can write the estimators of the mixing probability, means, and variances in a nice closed form, and I demonstrated how to implement the corresponding iterative estimation procedure from scratch. Results were then compared to those obtained from the very nice R package flexmix.

However, rarely do we get such nice closed form estimators! We usually need to use numerical methods to maximize our objective function directly. In this blog post, I demonstrate how we can specify our objective function, and use the optim function in R to obtain our parameter estimates. optim has lots of options, and we will cover how to change the optimization procedure and implement restrictions on our parameter spaces.
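
To fix ideas before we dive in, here is a minimal sketch (not the code from this post) of the basic optim workflow: we write the objective as an R function of a parameter vector and, since optim minimizes by default, we hand it the negative log-likelihood. The data and starting values below are made up purely for illustration; the "L-BFGS-B" method is one way to impose simple box constraints, such as keeping a standard deviation positive.

# Illustrative only: fit a single Gaussian by minimizing the negative
# log-likelihood with optim.
set.seed(1)
y <- rnorm(100, mean = 2, sd = 1.5)

negloglik <- function(par, y) {
  -sum(dnorm(y, mean = par[1], sd = par[2], log = TRUE))
}

fit <- optim(par = c(0, 1), fn = negloglik, y = y,
             method = "L-BFGS-B",      # supports box constraints
             lower = c(-Inf, 1e-6))    # keep the standard deviation positive
fit$par                                # estimated mean and sd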

EM for two-component Gaussian mixture

Let’s quickly recap our motivation, previously discussed in Embracing the EM algorithm: One continuous response.

We randomly sample N patients from a population and examine the empirical density of their responses. We notice two modes and, based on prior knowledge, hypothesize that the density is actually a mixture of two Gaussian densities. For example, the density centered around greater responses may correspond to “healthy” patients and the other to “ill” patients. We would like to (1) estimate the probability of belonging to each subpopulation; and (2) estimate the subpopulation parameters, e.g., the mean response among the healthy and the ill. But we have a problem: we don’t know who belongs to each subpopulation. In other words, subpopulation labels are unobserved or “latent.”

Figure: Observed two-component Gaussian mixture density (purple), and distribution of responses among latent healthy (blue) and ill (red) patient subpopulations.

We can represent the density of the observed responses as a mixture of the subpopulation densities:

    \[f(y) = \pi~ f_1(y) + (1-\pi)~ f_2(y).\]

That is, individuals belong to the first subpopulation, with responses distributed according to density f_1(y), with probability \pi, and to the second, with responses distributed according to f_2(y), with probability 1-\pi. We assume both densities are Gaussian with respective means \mu_1 and \mu_2 and variances \sigma_1^2 and \sigma_2^2.
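
For concreteness, this mixture density is easy to write down in R. The parameter values below are made up for illustration only and are not estimates from any data:

# Two-component Gaussian mixture density:
# f(y) = pi * N(mu1, sigma1^2) + (1 - pi) * N(mu2, sigma2^2)
dmix <- function(y, pi, mu1, sigma1, mu2, sigma2) {
  pi * dnorm(y, mean = mu1, sd = sigma1) +
    (1 - pi) * dnorm(y, mean = mu2, sd = sigma2)
}

# Plot an illustrative mixture with modes near 0 and 4
curve(dmix(x, pi = 0.35, mu1 = 0, sigma1 = 1, mu2 = 4, sigma2 = 1),
      from = -4, to = 8, xlab = "y", ylab = "f(y)")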

Continue reading EM Algorithm Essentials: Maximizing objective functions using R’s optim

Embracing the EM algorithm: One continuous response

Overview

I’m currently working on a project that revolves around the EM algorithm, and am finally realizing the power of this machinery. It really is like that movie with Jim Carrey where he can’t stop seeing the number 23 everywhere, except for me it’s the EM algorithm. Apparently this is called THE BAADER-MEINHOF PHENOMENON, oooh that’s fancy. You’ve probably seen the EM algorithm around too – though perhaps you didn’t know it. It’s commonly used for estimation with missing data. A modified EM algorithm (EMis) is used by the Amelia library in R. The EM algorithm also underpins latent variable models, which makes sense because latent variables are really missing observations when you think about it, right?! The more I learn about statistics, the more I realize most things are really missing data problems… cough potential outcomes cough

Anyways, I was previously taught the EM algorithm using the classic multinomial example. This is a great teaching tool, but I’ve never run into a situation like this in my life (yet). But, I do run into mixture distributions a surprising amount – mostly when investigating heterogeneity within patient populations. There’s a whole textbook on this, see: Medical Applications of Finite Mixture Models. The EM algorithm makes a lot more sense to me in the context of mixture models:

  • We sample a group of patients and observe their response.
  • We notice a bimodal structure in the response distribution.
  • We hypothesize the observed distribution actually corresponds to two subpopulations or “classes.”
  • We don’t know who belongs to which subpopulation.
  • We estimate the probability of latent class membership using the EM algorithm.

Wouldn’t ya know it, this is unsupervised clustering.

In this blog post, I motivate the EM algorithm in the context of a two-component Gaussian mixture model. A thorough walkthrough of the underlying theory is provided. In this case, the estimators take a nice closed form, but this is rarely the case for complex problems encountered in practice. R code for implementing the EM algorithm using the closed-form estimators is provided. I also demonstrate how this model can be easily fit using the flexmix library.
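
As a teaser, here is a rough, hedged sketch of what a flexmix fit can look like. The simulated data, mixture settings, and the prior() and parameters() accessors shown here are my own illustration rather than the worked example from the post:

library(flexmix)

# Simulate an illustrative 30/70 mixture of N(0, 1) and N(4, 1) responses
set.seed(42)
y <- c(rnorm(300, mean = 0, sd = 1), rnorm(700, mean = 4, sd = 1))

fit <- flexmix(y ~ 1, k = 2)   # intercept-only Gaussian mixture, two components
prior(fit)                     # estimated mixing probabilities
parameters(fit)                # component-specific means (intercepts) and sigmas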

Figure: A two-component Gaussian mixture density.

Continue reading Embracing the EM algorithm: One continuous response

Analyze and visualize DAGs in R by integrating the dagR, dagitty, and ggdag libraries

Huzzah!!

This blog post makes 3/3 for the year. I made it, folks! I wasn’t sure I would. It was a jam-packed year for us: I turned 30 and finished my Ph.D. at Western (not causal – well, maybe), we sold our first house and moved from Canada to the USA, and I promptly started my postdoc at Harvard. But it was a good year for us, too. We are having fun exploring Boston, I’m enjoying my postdoc work and co-workers, and I can now force my family to address all my Christmas presents to Dr. Emma – worth it. Very much looking forward to what the New Year brings.



Here’s looking at you, New Year! (Left: Dr. Emma, Right: Just Ethan)

D’ya like DAGs?

If so, good news! This blogpost is all about DAGs. Directed acyclic graphs, that is. One of the most popular posts on my site is Using a DAG to simulate data with the dagR library. I think this is really interesting. Generation of data from randomized controlled trials is relatively straightforward, but I imagine this is not the case for observational studies, confounding and all. DAGs represent a way to formalize our assumptions around causal mechanisms, i.e., X causes Y, and this relationship is confounded by Z, for example. As the adoption of causal inference grows (look at any statistical conference programme), it makes sense that we would want to assess novel methods using data generated according to realistic mechanisms. Simulation according to DAGs allows us to do this, or at least attempt to.

There are two things I want to comment on regarding my previous blogpost. First, I’d really like to see some more sophisticated ways of generating data according to DAGs. The dagR library is limited to continuous and binary exposures, which may be good enough for now but won’t be enough in the long run (survival endpoints are very common in medicine, and so too are ordinal exposures and outcomes, etc.). I find that the way direct effects are specified and defined also makes it challenging to know the “ground truth” in some cases. I’d like to see the packages do more heavy lifting.

I’m starting to see more materials around DAG simulation, such as the DagSim framework for Python, or papers like Illustrating How to Simulate Data From Directed Acyclic Graphs to Understand Epidemiologic Concepts and DagSim: Combining DAG-based model structure with unconstrained data types and relations for flexible, transparent, and modularized data simulation. There might be some useful nuggets in here already, and movement in this area suggests we should see something promising soon. Also, thank you to the people working on this!

Second, the default diagrams produced by dagR aren’t bad, but they aren’t great. Sorry (see also: thank you above). Writing this blog post, I noticed that an article on the use of dagR was published by the package authors in 2022. Funny enough, they seem to recognize it too – suggesting that the ggdag library in R can be used to improve the graphical layout of DAGs. Hey, wouldn’t you know it – that’s exactly what this blog post is about!! A great feature of the dagR library is that you can convert a dagR object to a dagitty object with a single command. So, let’s make your already programmed DAG work for you!

TL;DR

This blog post is a quick tutorial on how to transform dagR objects into dagitty objects so you can construct publication-worthy visualizations of your DAG using the ggdag library.

library(dagR)
library(dagitty)
library(ggdag)
library(tidyverse)
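
To give a flavour of the end goal (this is not the dagR conversion itself, which is covered below), once a DAG lives in a dagitty object, ggdag can plot it directly. The tiny confounding structure here is made up for illustration:

# Illustrative only: a hand-specified dagitty DAG, plotted with ggdag
dag <- dagitty("dag { Z -> X ; Z -> Y ; X -> Y }")
ggdag(dag) + theme_dag()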

Continue reading Analyze and visualize DAGs in R by integrating the dagR, dagitty, and ggdag libraries

Minor musings on measurement

This is the life story before a recipe that you can skip

Last year I hit my New Year’s resolution of writing 3 blog posts, but just barely. I think this year is going to be very similar. I’ve written 1 out of 3 blog posts, ahhh! Mid-November means I better get hustling.

There is a lot of writing I do for my research that never ends up seeing the light of day. And that’s a shame because the perfectionist in me agonized over it just the same. A lot of this cast-aside writing is the result of trying to teach myself about topics or methods that are already well-accepted and known. So, nothing novel that deserves publication, really… 

… but it makes great blog post material! It seems to be reaching people as well, and that is cool. From my blog analytics, I can see that certain posts seem to be particularly handy from mid-October to early November. Hmmm… intro statistics, machine learning, or math stats midterms, methinks? Anyhow, here’s another lil’ ditty along those same lines. Yes, I am phoning in these last few blog posts of the year.

Overview

This blog post is about how we translate real-world phenomena into numbers that we can gain insight from. In other words, measurement! I think that measurement is both widely taken for granted and still hotly debated on several fronts. Measurement is something that has been around for much of human history in different forms, yet also something that is presently at the forefront of modern psychological and medical sciences (think patient-reported outcome measures, etc.) An interesting topic with a rich history, to say the least.

I’m going to focus specifically on different ways of classifying measurements that we commonly see in statistics. These classifications are not always widely accepted and are, again, a source of debate in some cases. I also probably won’t get everything right. But it’s useful to think about what implications measurement carries for analysis, interpretation, and beyond. Hopefully, this inspires you to dig deeper into this topic! And if you disagree, promptly locate the comments section at the bottom of this page.

Continue reading Minor musings on measurement