## U-, V-, and Dupree statistics

To start, I apologize for this blog’s title, but I couldn’t resist referencing the Owen Wilson classic You, Me and Dupree – wow! The other gold-plated candidate was U-statistics and You. Please, please, hold your applause.

My previous blog post defined statistical functionals as any real-valued function of an unknown CDF $F$, denoted $T(F)$, and explained how plug-in estimators could be constructed by substituting the empirical cumulative distribution function (ECDF) $\hat{F}_n$ for the unknown CDF $F$. Plug-in estimators of the mean and variance were provided and used to demonstrate plug-in estimators’ potential to be biased.

Statistical functionals that meet the following two criteria represent a special family of functionals known as expectation functionals:

1) $T(F)$ is the expectation of a function $\phi$ with respect to the distribution function $F$; and

2) the function $\phi$ takes the form of a symmetric kernel.

Expectation functionals encompass many common parameters and are well-behaved. Plug-in estimators of expectation functionals, named V-statistics after von Mises, can be obtained but may be biased. It is, however, always possible to construct an unbiased estimator of expectation functionals regardless of the underlying distribution function $F$. These estimators are named U-statistics, with the “U” standing for unbiased.

This blog post provides 1) the definitions of symmetric kernels and expectation functionals; 2) an overview of plug-in estimators of expectation functionals, or V-statistics; and 3) an overview of unbiased estimators of expectation functionals, or U-statistics.
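As a preview, the V- versus U-statistic distinction can be sketched with the variance functional, whose symmetric kernel is $\phi(x_1, x_2) = (x_1 - x_2)^2 / 2$ (the variable names below are my own illustrative choices):

```r
# V-statistic (plug-in) vs. U-statistic for the variance functional,
# whose symmetric kernel is phi(x1, x2) = (x1 - x2)^2 / 2.
set.seed(1)
x <- rnorm(50, mean = 0, sd = 2)
n <- length(x)

# V-statistic: average the kernel over ALL n^2 ordered pairs (i, j),
# including i = j. This equals the biased variance, (n - 1)/n * var(x).
v_stat <- mean(outer(x, x, function(a, b) (a - b)^2 / 2))

# U-statistic: average the kernel over the choose(n, 2) distinct pairs
# with i < j only. This equals the unbiased sample variance, var(x).
pairs  <- combn(n, 2)
u_stat <- mean((x[pairs[1, ], drop = TRUE] - x[pairs[2, ], drop = TRUE])^2 / 2)

v_stat - (n - 1) / n * var(x)  # ~ 0: V-statistic matches the biased variance
u_stat - var(x)                # ~ 0: U-statistic matches the unbiased variance
```

Averaging over all $n^2$ pairs (V-statistic) reuses each observation against itself, which is exactly where the bias comes from; restricting to distinct pairs (U-statistic) removes it.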

## Using a DAG to simulate data with the dagR library

Directed acyclic graphs (DAGs), and causal graphs in general, provide a framework for making assumptions explicit and identifying confounders or mediators of the relationship between the exposure of interest and outcome that need to be adjusted for in analysis. Recently, I ran into the need to generate data from a DAG for a paper I am writing with my peers Kevin McIntyre and Joshua Wiener. After a quick Google search, I was pleasantly surprised to see there were several options to do so. In particular, the dagR library provides “functions to draw, manipulate, [and] evaluate directed acyclic graphs and simulate corresponding data”.

Besides dagR’s reference manual, a short letter published in Epidemiology, and a limited collection of examples, I couldn’t find too many resources regarding how to use the functionality provided by dagR. The goal of this blog post is to provide an expository example of how to create a DAG and generate data from it using the dagR library.

To simulate data from a DAG with dagR, we need to:

1. Create the DAG of interest using the dag.init function by specifying its nodes (exposure, outcome, and covariates) and their directed arcs (directed arrows to/from nodes).
2. Pass the DAG from (1) to the dag.sim function and specify the number of observations to be generated, arc coefficients, node types (binary or continuous), and parameters of the node distributions (Normal or Bernoulli).

For this tutorial, we are going to try to replicate the simple confounding/common cause DAG presented in Figure 1b as well as the more complex DAG in Figure 2a of Shrier and Platt’s (2008) paper, Reducing bias through directed acyclic graphs.

library(dagR)
set.seed(12345)
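Before reaching for dag.init and dag.sim, it helps to see what the common-cause structure of Figure 1b amounts to when simulated by hand in base R; the variable names and coefficients below are arbitrary illustrative choices, not values from the paper:

```r
# Hand-rolled simulation of a simple confounding DAG:
#   C -> X (exposure), C -> Y (outcome), X -> Y.
# All coefficients are arbitrary illustrative values.
n <- 10000
C <- rnorm(n)                       # confounder (common cause)
X <- 0.8 * C + rnorm(n)             # exposure depends on the confounder
Y <- 0.5 * X + 0.6 * C + rnorm(n)   # outcome depends on exposure and confounder

# The unadjusted estimate of the X -> Y effect is confounded ...
coef(lm(Y ~ X))["X"]       # biased upward, away from the true 0.5
# ... while adjusting for C recovers the true coefficient.
coef(lm(Y ~ X + C))["X"]   # close to 0.5
```

dagR automates exactly this kind of structural-equation simulation once the DAG is specified, which is what the dag.init/dag.sim workflow below handles.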

## Parametric vs. Nonparametric Approach to Estimations

Parametric statistics assume that the unknown CDF $F$ belongs to a family of CDFs characterized by a parameter (vector) $\theta$. As the form of $F$ is assumed, the target of estimation is its parameter $\theta$. Thus, all uncertainty about $F$ is comprised of uncertainty about its parameters. Parameters are estimated by $\hat{\theta}$, and estimates are substituted into the assumed distribution to conduct inference for the quantities of interest. If the assumed distribution is incorrect, inference may also be inaccurate, or trends in the data may be missed.

To demonstrate the parametric approach, consider independent and identically distributed random variables $X_1, \ldots, X_n$ generated from an exponential distribution with rate $\lambda$. Investigators wish to estimate the 75th percentile and erroneously assume that their data are normally distributed. Thus, $F$ is assumed to be the Normal CDF, but $\mu$ and $\sigma$ are unknown. The parameters $\mu$ and $\sigma$ are estimated in their typical way by $\bar{x}$ and $s$, respectively. Since the normal distribution belongs to the location-scale family, an estimate of the 75th percentile is provided by

$$\hat{x}_{0.75} = \bar{x} + s\, \Phi^{-1}(0.75),$$

where $\Phi^{-1}$ is the standard normal quantile function, also known as the probit.

set.seed(12345)
library(tidyverse, quietly = TRUE)

# Generate data from Exp(2)
x <- rexp(n = 100, rate = 2)

# True value of 75th percentile with rate = 2
true <- qexp(p = 0.75, rate = 2)
true

## [1] 0.6931472

# Estimate mu and sigma
xbar <- mean(x)
s    <- sd(x)

# Estimate 75th percentile assuming mu = xbar and sigma = s
param_est <- xbar + s * qnorm(p = 0.75)
param_est

## [1] 0.8792925


The true value of the 75th percentile of the Exp(2) distribution is 0.69, while the parametric estimate is 0.88.

Nonparametric statistics make fewer assumptions about the unknown distribution $F$, requiring only mild conditions such as continuity or the existence of specific moments. Instead of estimating parameters of $F$, $F$ itself is the target of estimation. $F$ is commonly estimated by the empirical cumulative distribution function (ECDF) $\hat{F}_n$,

$$\hat{F}_n(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}(X_i \leq x).$$

Any statistic that can be expressed as a function of the CDF, known as a statistical functional and denoted $T(F)$, can be estimated by substituting $\hat{F}_n$ for $F$. That is, plug-in estimators can be obtained as $T(\hat{F}_n)$.

## Motivation

For observed pairs $(x_i, y_i)$, $i = 1, \ldots, n$, the relationship between $x$ and $y$ can be defined generally as

$$y_i = f(x_i) + \epsilon_i,$$

where $\mathbb{E}[\epsilon_i] = 0$ and $\text{Var}(\epsilon_i) = \sigma^2$. If we are unsure about the form of $f$, our objective may be to estimate $f$ without making too many assumptions about its shape. In other words, we aim to “let the data speak for itself”.

Simulated scatterplot of $(x_i, y_i)$. The true function $f$ is displayed in green.

Non-parametric approaches require only that $f$ be smooth and continuous. These assumptions are far less restrictive than alternative parametric approaches, thereby increasing the number of potential fits and providing additional flexibility. This makes non-parametric models particularly appealing when prior knowledge about $f$’s functional form is limited.

## Estimating the Regression Function

If multiple values of $y$ were observed at each $x$, $f(x)$ could be estimated by averaging the values of the response at each $x$. However, since $x$ is often continuous, it can take on a wide range of values, making this quite rare. Instead, a neighbourhood of $x$ is considered.

Result of averaging $y$ at each $x$. The fit is extremely rough due to gaps in $x$ and low frequency at each $x$.

Define the neighbourhood around $x$ as $[x - h, x + h]$ for some bandwidth $h > 0$. Then, a simple non-parametric estimate of $f(x)$ can be constructed as the average of the $y_i$’s corresponding to the $x_i$ within this neighbourhood. That is,

$$\hat{f}(x) = \frac{\sum_{i=1}^{n} K\left(\frac{x_i - x}{h}\right) y_i}{\sum_{i=1}^{n} K\left(\frac{x_i - x}{h}\right)} \quad (1)$$

where

$$K(u) = \frac{1}{2}\, \mathbb{1}(|u| \leq 1)$$

is the uniform kernel. This estimator, referred to as the Nadaraya-Watson estimator, can be generalized to any kernel function $K$ (see my previous blog post). It is, however, convention to use kernel functions of degree 2 (e.g. the Gaussian and Epanechnikov kernels).
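Equation (1) is short enough to implement directly. The sketch below (my own helper, with an arbitrary smooth $f$, bandwidth, and sample size) computes the Nadaraya-Watson estimate under either the uniform or the Gaussian kernel:

```r
set.seed(12345)

# Simulated data: an arbitrary smooth f with additive mean-zero noise
x <- runif(200, 0, 2 * pi)
y <- sin(x) + rnorm(200, sd = 0.3)

# Nadaraya-Watson estimator: a kernel-weighted average of the y's
# in a neighbourhood of each evaluation point x0
nw <- function(x0, x, y, h, kernel = c("gaussian", "uniform")) {
  kernel <- match.arg(kernel)
  sapply(x0, function(t) {
    u <- (x - t) / h
    w <- switch(kernel,
                gaussian = dnorm(u),          # smooth weights
                uniform  = 0.5 * (abs(u) <= 1)) # equation (1)'s kernel
    sum(w * y) / sum(w)
  })
}

# Evaluate away from the boundaries, where kernel estimators are biased
grid <- seq(0.5, 2 * pi - 0.5, length.out = 50)
fhat <- nw(grid, x, y, h = 0.5)

# The fitted curve should roughly track the true f(x) = sin(x)
max(abs(fhat - sin(grid)))
```

The Gaussian kernel simply replaces the hard in/out neighbourhood of the uniform kernel with smoothly decaying weights, which is why its fits look less jagged.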

The red line is the result of estimating $f$ with a Gaussian kernel and an arbitrarily selected bandwidth. The green line represents the true function $f$.