Simulating data from two-arm cluster randomized trials (CRTs) and partially-nested individually randomized group treatment trials (IRGTs) using base R

Overview

In a previous blogpost, Comprehending complex designs: Cluster randomized trials, I walked through the nuances and challenges of cluster randomized trials (CRTs). CRTs randomize groups of individuals, such as families or clinics, rather than the individuals themselves. CRTs are used for a variety of reasons, including evaluating the spread of infectious disease within households or assessing whether a new intervention is effective or feasible in real-world settings. Participants within the same cluster may share the same environment or care provider, for example, leading to correlated responses. If this intracluster correlation is not accounted for, variances will be underestimated and inference methods will not have the operating characteristics (e.g., type I error rate) we expect. Linear mixed models represent one approach for obtaining cluster-adjusted estimates, and their application was demonstrated using data from the SHARE cluster trial, which evaluated different sex education curricula (interventions) in schools (clusters).

Individually randomized group treatment trials (IRGTs) are closely related to CRTs but can require slightly more complex analytic strategies. IRGT designs arise naturally when individuals do not initially belong to a group or cluster but are individually randomized to receive a group-based intervention or to receive treatment through a shared agent. As a result, individuals are independent at baseline, but intracluster correlation can increase over follow-up as individuals interact within their respective group or with their shared agent. IRGTs can be “fully-nested,” meaning that both the control and experimental conditions feature a group-based intervention, or “partially-nested,” meaning that the experimental condition is group-based while the control condition is not. A fully-nested IRGT may be used to compare structured group therapy versus group discussion for mental health outcomes, for example. If both arms feature groups with the same intracluster correlation, analysis of fully-nested IRGTs is practically identical to that of CRTs. In comparison, a partially-nested IRGT may be used to compare group therapy versus individual standard of care or a waitlist control. Analysis of partially-nested IRGTs is more complex because intracluster correlation is present in only one arm, and methods must be adapted to handle heterogeneous covariance or correlation structures. If a trial is fully-nested but the arms do not share the same intracluster correlation, similar considerations apply.
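As a rough sketch of the kind of adaptation required, one common specification (my own illustration, not necessarily the code used in the post) fits a random effect that acts only in the group-based arm, together with arm-specific residual variances. All names and parameter values below are illustrative assumptions:

    # A sketch of one common nlme specification for a partially-nested IRGT:
    # a random effect active only in the group-based arm, plus arm-specific
    # residual variances. All parameter values are illustrative assumptions.
    library(nlme)
    set.seed(123)

    n_grp <- 10   # therapy groups in the experimental arm
    m     <- 8    # participants per therapy group
    n_ctl <- 80   # independent control participants

    # Treated participants share a group effect; controls are singletons.
    g_eff <- rep(rnorm(n_grp, 0, 0.3), each = m)
    y_trt <- 0.4 + g_eff + rnorm(n_grp * m, 0, 0.9)
    y_ctl <- rnorm(n_ctl, 0, 1)

    dat <- data.frame(
      y       = c(y_trt, y_ctl),
      arm     = rep(1:0, c(n_grp * m, n_ctl)),
      cluster = factor(c(rep(seq_len(n_grp), each = m),   # real groups
                         n_grp + seq_len(n_ctl)))         # singleton "clusters"
    )

    fit_pn <- lme(
      y ~ arm,
      random  = ~ 0 + arm | cluster,        # random effect only when arm == 1
      weights = varIdent(form = ~ 1 | arm), # heteroscedastic residuals by arm
      data    = dat
    )
    summary(fit_pn)

Because the random effect is multiplied by the treatment indicator, control participants (each in their own singleton cluster) contribute no between-cluster variance, while varIdent allows the residual variance to differ by arm.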

To provide insight into data generating mechanisms and inference, this blog post demonstrates how to simulate normally distributed outcomes from (1) a two-arm cluster randomized trial and (2) a two-arm, partially-nested individually randomized group treatment trial. I use only base R for data generation, so these approaches can be implemented widely. Simulating complex trial designs is helpful for sample size calculation and for understanding the operating characteristics of inference methods in different scenarios, such as small samples. Analysis of the simulated data proceeds using linear mixed models fit with the nlme package, and visualization uses ggplot2.
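To give a flavor of the approach, here is a minimal sketch of the CRT case; all parameter values (20 clusters of 10, an intracluster correlation of 0.05, a treatment effect of 0.4) are illustrative choices of mine rather than values from the post:

    # A minimal sketch: simulate a two-arm CRT with normal outcomes in base R,
    # then fit a cluster-adjusted model. Parameter values are illustrative.
    set.seed(42)

    k     <- 20        # clusters, randomized 1:1 to control/treatment
    m     <- 10        # individuals per cluster
    icc   <- 0.05      # target intracluster correlation
    delta <- 0.4       # treatment effect on the mean
    tau2  <- icc       # between-cluster variance (total variance fixed at 1)
    eps2  <- 1 - tau2  # within-cluster (residual) variance

    cluster <- rep(seq_len(k), each = m)
    arm     <- rep(rep(0:1, each = k / 2), each = m)
    b       <- rnorm(k, 0, sqrt(tau2))[cluster]   # shared cluster effects
    y       <- delta * arm + b + rnorm(k * m, 0, sqrt(eps2))
    dat     <- data.frame(cluster = factor(cluster), arm, y)

    # Cluster-adjusted analysis via a random intercept for cluster
    library(nlme)
    summary(lme(y ~ arm, random = ~ 1 | cluster, data = dat))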

Continue reading Simulating data from two-arm cluster randomized trials (CRTs) and partially-nested individually randomized group treatment trials (IRGTs) using base R

Practical inference for win measures via U-statistic decomposition

Introduction

In a previous blogpost, I described how the complex estimation of U-statistic variances can be simplified using a “structural component” approach introduced by Sen (1960). The structural component approach is very similar to the leave-one-out (LOO) jackknife. Essentially, the idea behind both approaches is to decompose the statistic into individual contributions. In Sen’s approach these are referred to as “structural components,” and in the LOO jackknife they are referred to as “pseudo-values” or sometimes “pseudo-observations.” The construction of these quantities differs somewhat conceptually, but in another blogpost I discuss their one-to-one relationship in specific cases. We can then take the sample variance of these individual contributions to estimate the variance of the statistic.
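As a toy illustration of the decompose-then-take-the-sample-variance idea (my own example, using the sample mean, where everything can be checked by hand):

    # For the sample mean, each observation's leave-one-out pseudo-value is
    # the observation itself, and the sample variance of the pseudo-values
    # recovers the usual var(x) / n. Illustrative data.
    set.seed(1)
    x <- rnorm(25)
    n <- length(x)

    theta_hat <- mean(x)
    loo <- sapply(seq_len(n), function(i) mean(x[-i]))  # leave-one-out estimates
    pv  <- n * theta_hat - (n - 1) * loo                # pseudo-values

    all.equal(pv, x)   # TRUE: pseudo-values of the mean are the data themselves
    var(pv) / n        # variance estimate for the sample mean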

Estimators for increasingly popular win measures, including the win probability, net benefit, win odds, and win ratio, are obtained using large-sample two-sample U-statistic theory. Variance estimators for these measures are complex, requiring the calculation of multiple joint probabilities.

Here, I demonstrate how the variance of win measures can be practically estimated in two-arm randomized trials using the structural component approach. Results and estimators are provided for the win probability, the net benefit, and the win odds. For simplicity, only a single outcome is considered; however, extension to hierarchical composite outcomes is immediate with use of an appropriate kernel function.
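The following sketch illustrates the general recipe with made-up data; the kernel, the delta-method steps, and all values are my illustrative choices rather than the post’s code:

    # Win probability, net benefit, and win odds with structural-component
    # standard errors. Illustrative two-arm data; the kernel scores a treated
    # "win" as 1, a tie as 0.5, and a loss as 0.
    set.seed(2)
    trt <- rnorm(50, 0.3)   # hypothetical treated-arm outcomes
    ctl <- rnorm(50)        # hypothetical control-arm outcomes
    m <- length(trt); n <- length(ctl)

    k_mat <- outer(trt, ctl, function(a, b) (a > b) + 0.5 * (a == b))
    theta <- mean(k_mat)                        # win probability estimate

    # Structural components: each subject's average kernel contribution
    v_trt <- rowMeans(k_mat)
    v_ctl <- colMeans(k_mat)
    v_theta <- var(v_trt) / m + var(v_ctl) / n  # large-sample variance

    nb <- 2 * theta - 1         # net benefit = P(win) - P(loss)
    wo <- theta / (1 - theta)   # win odds

    # Delta-method standard errors (log scale for the win odds)
    se_nb    <- 2 * sqrt(v_theta)
    se_logwo <- sqrt(v_theta) / (theta * (1 - theta))
    ci_wo    <- exp(log(wo) + c(-1, 1) * 1.96 * se_logwo)

    round(c(theta = theta, se = sqrt(v_theta), nb = nb, wo = wo), 3)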

Continue reading Practical inference for win measures via U-statistic decomposition

Nonparametric neighbours: U-statistic structural components and jackknife pseudo-observations for the AUC

Two of my recent blog posts focused on two different, but, as we will see, related methods that essentially transform observed responses into a summary of their contribution to an estimate: structural components resulting from Sen’s (1960) decomposition of U-statistics, and pseudo-observations resulting from application of the leave-one-out jackknife. As I note in this comment, I think the real value of deconstructing estimators in this way comes from using these quantities, which in special (but common) cases are asymptotically uncorrelated and identically distributed, to (1) simplify otherwise complex variance estimates and construct interval estimates, and (2) apply regression methods to estimators without an existing regression framework.

As discussed by Miller (1974), pseudo-observations may be treated as approximately independent and identically distributed random variables when the quantity of interest is a function of the mean or variance and, more generally, any function of a U-statistic. Miller also outlines several other scenarios in which these methods are applicable. Many estimators of popular “parameters” can in fact be expressed as U-statistics, so these methods are quite broadly applicable. A review of basic U-statistic theory and some common examples, notably the difference in means and the Wilcoxon Mann-Whitney test statistic, can be found in my blog post One, Two, U: Examples of common one- and two-sample U-statistics.
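For instance, here is a quick numerical check (my own toy example) that the difference in means is a two-sample U-statistic with kernel phi(x, y) = x - y:

    # The average of the kernel x_i - y_j over all pairs recovers the
    # difference in means exactly. Illustrative data.
    set.seed(3)
    x <- rnorm(15); y <- rnorm(20, 1)

    u <- mean(outer(x, y, `-`))      # U-statistic: mean over all (i, j) pairs
    all.equal(u, mean(x) - mean(y))  # TRUE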

As an example of use case (1), DeLong et al. (1988) used structural components to estimate the variances and covariances of the areas under multiple correlated receiver operating characteristic (ROC) curves, i.e., multiple AUCs. Hanley and Hajian-Tilaki (1997) later referred to the method of DeLong et al. (1988) as “the cleanest and most elegant approach to variances and covariances of AUCs.” As an example of use case (2), Andersen & Pohar Perme (2010) provide a thorough summary of how pseudo-observations can be used to construct regression models for important survival parameters such as survival at a single time point and the restricted mean survival time.

Now, structural components are restricted to U-statistics, while pseudo-observations may be used more generally, as discussed. But if we construct pseudo-observations for a U-statistic, one of several “valid” scenarios, what is the relationship between the two quantities? Hanley and Hajian-Tilaki (1997) provide a lovely discussion of the equivalence of the two methods when applied to the area under the receiver operating characteristic curve, or simply the AUC. This blog post follows their discussion, providing concrete examples of computing structural components and pseudo-observations in R and demonstrating their equivalence in this special case.
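To preview the equivalence with a toy example of my own (the post works through this more carefully): for the AUC, jackknife pseudo-values computed within one group coincide exactly with that group’s structural components.

    # AUC via the usual concordance kernel, then two decompositions for the
    # "diseased" group: structural components vs. jackknife pseudo-values.
    # Illustrative data.
    set.seed(4)
    x <- rnorm(25)       # scores, non-diseased
    y <- rnorm(30, 1)    # scores, diseased
    m <- length(x); n <- length(y)

    phi <- outer(x, y, function(a, b) (a < b) + 0.5 * (a == b))
    auc <- mean(phi)

    # Structural components for the diseased group
    v_y <- colMeans(phi)

    # Jackknife pseudo-values, deleting one diseased subject at a time
    auc_loo <- sapply(seq_len(n), function(j) mean(phi[, -j]))
    pv_y    <- n * auc - (n - 1) * auc_loo

    all.equal(v_y, pv_y)   # TRUE: the two decompositions agree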

Continue reading Nonparametric neighbours: U-statistic structural components and jackknife pseudo-observations for the AUC

Resampling, the jackknife, and pseudo-observations

Resampling methods approximate the sampling distribution of a statistic or estimator. In essence, a sample taken from the population is treated as a population itself. A large number of new samples, or resamples, is drawn from this “new population,” commonly with replacement, and the estimate of interest is recomputed within each resample. These estimate replicates are then used to construct an empirical sampling distribution from which confidence intervals, bias, and variance may be estimated. Resampling methods are particularly advantageous for statistics or estimators for which standard inference methods do not exist or are difficult to derive.
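As a minimal base-R illustration of this recipe (the statistic, data, and number of resamples are arbitrary choices of mine):

    # Bootstrap the median: resample with replacement, recompute the estimate,
    # and summarize the replicates. Illustrative data and settings.
    set.seed(5)
    x <- rexp(50)   # the observed sample, treated as the "new population"
    B <- 2000       # number of resamples

    boot_med <- replicate(B, median(sample(x, replace = TRUE)))

    sd(boot_med)                         # bootstrap standard error
    quantile(boot_med, c(0.025, 0.975))  # percentile confidence interval
    mean(boot_med) - median(x)           # bootstrap bias estimate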

The jackknife is a popular resampling method, first introduced by Quenouille in 1949 as a method of bias estimation. In 1958, jackknifing was both named by Tukey and expanded to include variance estimation. A jackknife is a multipurpose tool, similar to a Swiss Army knife, that can get its user out of tricky situations. Inspired by the jackknife, Efron later developed arguably the most popular resampling method, the bootstrap, in 1979.
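A small sketch, with my own toy example, makes both classical uses concrete: the jackknife exactly removes the bias of the plug-in (divide-by-n) variance estimator, and its pseudo-values provide a variance estimate for the statistic.

    # Jackknife bias correction and variance estimation for the plug-in
    # variance, a deliberately biased estimator. Illustrative data.
    set.seed(6)
    x <- rnorm(40)
    n <- length(x)

    plugin_var <- function(z) mean((z - mean(z))^2)   # divides by n, so biased

    theta_hat <- plugin_var(x)
    loo <- sapply(seq_len(n), function(i) plugin_var(x[-i]))

    bias_jack <- (n - 1) * (mean(loo) - theta_hat)  # jackknife bias estimate
    theta_bc  <- theta_hat - bias_jack              # bias-corrected estimate

    pv <- n * theta_hat - (n - 1) * loo             # pseudo-values
    var_jack <- var(pv) / n                         # jackknife variance estimate

    # The corrected estimate matches the usual unbiased sample variance
    c(plugin = theta_hat, corrected = theta_bc, usual = var(x))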

In Efron’s (1982) book The Jackknife, the Bootstrap, and Other Resampling Plans, he states,

Good simple ideas, of which the jackknife is a prime example, are our most precious intellectual commodity, so there is no need to apologize for the easy mathematical level.

Despite existing since the 1940s, resampling methods were long computationally infeasible given the power required to repeatedly resample and recalculate estimates. With today’s computing power, the uncomplicated yet powerful jackknife, and resampling methods more generally, should be a tool in every analyst’s toolbox.

Continue reading Resampling, the jackknife, and pseudo-observations