## The Probabilistic Index for Two Normally Distributed Outcomes

Consider a two-armed study comparing a placebo and a treatment. Let $X$ and $Y$ denote the outcomes of randomly selected subjects in the placebo and treatment groups, respectively. In general, the probabilistic index (PI) is defined as

$$\text{PI} = P(X < Y) + \tfrac{1}{2}\,P(X = Y)$$

and is interpreted as the probability that a subject in the treatment group will have an increased response compared to a subject in the placebo group. The probabilistic index is a particularly useful effect measure for ordinal data, where effects can be difficult to define and interpret owing to the absence of a meaningful difference. However, it can also be used for continuous data, noting that when the outcome is continuous, $P(X = Y) = 0$ and the PI reduces to $\text{PI} = P(X < Y)$.

$\text{PI} = 0.5$ suggests an increased outcome is equally likely for subjects in the placebo and treatment groups, while $\text{PI} > 0.5$ suggests an increased outcome is more likely for subjects in the treatment group compared to the placebo group, and the opposite is true when $\text{PI} < 0.5$.

## Simulation

Suppose $X \sim N(\mu_X, \sigma_X^2)$ and $Y \sim N(\mu_Y, \sigma_Y^2)$ represent the independent outcomes in the placebo and treatment groups, respectively, and an increased value of the outcome is the desired response.

We simulate $n_X = n_Y = 50$ observations from each group such that treatment truly increases the outcome ($\mu_X = 5$, $\mu_Y = 7$) and the variances within each group are equal such that $\sigma_X = \sigma_Y = 1$. Under these assumptions, the PI has the closed form $\text{PI} = \Phi\left(\frac{\mu_Y - \mu_X}{\sqrt{\sigma_X^2 + \sigma_Y^2}}\right)$, where $\Phi$ is the standard normal CDF.

```r
# Loading required libraries
library(tidyverse)
library(gridExtra)

# Setting seed for reproducibility
set.seed(12345)

# Simulating data
n_X <- n_Y <- 50
sigma_X <- sigma_Y <- 1
mu_X <- 5; mu_Y <- 7

outcome_X <- rnorm(n = n_X, mean = mu_X, sd = sigma_X)
outcome_Y <- rnorm(n = n_Y, mean = mu_Y, sd = sigma_Y)

df <- data.frame(Group = c(rep('Placebo', n_X), rep('Treatment', n_Y)),
                 Outcome = c(outcome_X, outcome_Y))
```


Examining side-by-side histograms and boxplots of the outcomes within each group, there appears to be strong evidence that treatment increases the outcome as desired. Thus, we would expect a probabilistic index close to 1, as most outcomes in the treatment group appear larger than those in the placebo group.

```r
# Histogram by group
hist_p <- df %>%
  ggplot(aes(x = Outcome, fill = Group)) +
  geom_histogram(position = 'identity', alpha = 0.75, bins = 10) +
  theme_bw() +
  labs(y = 'Frequency')

# Boxplot by group
box_p <- df %>%
  ggplot(aes(x = Outcome, fill = Group)) +
  geom_boxplot() +
  theme_bw()

# Combine plots
grid.arrange(hist_p, box_p, nrow = 2)
```
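As a sanity check, the PI can also be estimated non-parametrically as the proportion of all placebo-treatment pairs in which the treated subject's outcome is larger, and compared to the closed-form value under normality. The snippet below is a self-contained sketch that regenerates the simulated outcomes with the same seed and parameters as above.

```r
# Regenerate the simulated outcomes (same seed and parameters as above)
set.seed(12345)
outcome_X <- rnorm(50, mean = 5, sd = 1)  # placebo
outcome_Y <- rnorm(50, mean = 7, sd = 1)  # treatment

# Empirical PI: proportion of placebo-treatment pairs with X < Y
PI_hat <- mean(outer(outcome_X, outcome_Y, FUN = "<"))

# Closed-form PI under normality: Phi((mu_Y - mu_X) / sqrt(sigma_X^2 + sigma_Y^2))
PI_true <- pnorm((7 - 5) / sqrt(1^2 + 1^2))  # approximately 0.921

c(empirical = PI_hat, theoretical = PI_true)
```

Both values should be close to 1, consistent with what the plots suggest.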


## Motivation

For observed pairs $(x_i, y_i)$, $i = 1, \ldots, n$, the relationship between $X$ and $Y$ can be defined generally as

$$y_i = f(x_i) + \epsilon_i$$

where $E[\epsilon_i] = 0$ and $\text{Var}(\epsilon_i) = \sigma^2$. If we are unsure about the form of $f$, our objective may be to estimate $f$ without making too many assumptions about its shape. In other words, we aim to “let the data speak for itself”.

Simulated scatterplot of the pairs $(x_i, y_i)$. The true function $f$ is displayed in green.

Non-parametric approaches require only that $f$ be smooth and continuous. These assumptions are far less restrictive than those of alternative parametric approaches, thereby increasing the number of potential fits and providing additional flexibility. This makes non-parametric models particularly appealing when prior knowledge about $f$'s functional form is limited.

## Estimating the Regression Function

If multiple values of $y$ were observed at each $x$, $f(x)$ could be estimated by averaging the responses observed at each $x$. However, since $X$ is often continuous, it can take on a wide range of values, making this quite rare. Instead, a neighbourhood of $x$ is considered.

Result of averaging the responses at each $x$. The fit is extremely rough due to gaps in $x$ and the low frequency of observations at each $x$.

Define the neighbourhood around $x$ as $[x - h, x + h]$ for some bandwidth $h > 0$. Then, a simple non-parametric estimate of $f(x)$ can be constructed as the average of the $y_i$'s corresponding to the $x_i$'s within this neighbourhood. That is,

$$\hat{f}(x) = \frac{\sum_{i=1}^{n} K\left(\frac{x_i - x}{h}\right) y_i}{\sum_{i=1}^{n} K\left(\frac{x_i - x}{h}\right)} \quad (1)$$

where

$$K(u) = \tfrac{1}{2}\,\mathbb{1}(|u| \leq 1)$$

is the uniform kernel. This estimator, referred to as the Nadaraya-Watson estimator, can be generalized to any kernel function (see my previous blog post). It is, however, convention to use kernel functions of degree 2 (e.g. the Gaussian and Epanechnikov kernels).

The red line is the result of estimating $f(x)$ via (1) with a Gaussian kernel and an arbitrarily selected bandwidth. The green line represents the true function $f(x)$.
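To make (1) concrete, here is a minimal sketch of the Nadaraya-Watson estimator with a Gaussian kernel. The true function ($\sin$), sample size, noise level, and bandwidth below are illustrative assumptions, not the values used to produce the figures in this post.

```r
set.seed(12345)

# Simulated data y = f(x) + error, with an assumed f(x) = sin(x) for illustration
n <- 100
x <- runif(n, min = 0, max = 2 * pi)
y <- sin(x) + rnorm(n, sd = 0.3)

# Nadaraya-Watson estimate of f at a single point x0 (Gaussian kernel)
nw <- function(x0, x, y, h) {
  w <- dnorm((x - x0) / h)  # kernel weights
  sum(w * y) / sum(w)       # weighted average of the responses
}

# Evaluate the fit over a grid of points
grid <- seq(0, 2 * pi, length.out = 200)
f_hat <- sapply(grid, nw, x = x, y = y, h = 0.25)

plot(x, y)
lines(grid, f_hat, col = 'red')        # estimated f
lines(grid, sin(grid), col = 'green')  # true f
```

Base R's `ksmooth(x, y, kernel = "normal")` implements the same estimator, though note that it rescales its `bandwidth` argument so that the kernel quartiles sit at $\pm 0.25 \times$ bandwidth.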

## Motivation

It is important to have an understanding of some of the more traditional approaches to function estimation and classification before delving into the trendier topics of neural networks and decision trees. Many of these methods build on an understanding of each other and thus to truly be a MACHINE LEARNING MASTER, we’ve got to pay our dues. We will therefore start with the slightly less sexy topic of kernel density estimation.

Let $X$ be a random variable with a continuous cumulative distribution function (CDF) $F(x)$ and probability density function (PDF)

$$f(x) = \frac{d}{dx}F(x).$$

Our goal is to estimate $f(x)$ from a random sample $X_1, \ldots, X_n$. Estimation of $f(x)$ has a number of applications, including construction of the popular Naive Bayes classifier,

$$P(C = c \mid X = x) \propto P(C = c) \prod_{j=1}^{p} f(x_j \mid C = c),$$

where the class-conditional densities $f(x_j \mid C = c)$ may each be estimated by kernel density estimation.
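As a minimal sketch of what is to come, the kernel density estimator $\hat{f}(x) = \frac{1}{nh}\sum_{i=1}^{n} K\left(\frac{x - X_i}{h}\right)$ with a Gaussian kernel can be written in a few lines of R. The sample, kernel, and bandwidth below are illustrative assumptions.

```r
set.seed(12345)
X <- rnorm(500)  # illustrative sample from a standard normal

# Kernel density estimate at a point x0: (1 / (n * h)) * sum K((x0 - X_i) / h)
kde <- function(x0, X, h) mean(dnorm((x0 - X) / h)) / h

h <- 0.4  # illustrative bandwidth
grid <- seq(-3, 3, length.out = 100)
f_hat <- sapply(grid, kde, X = X, h = h)

# Base R's density() implements the same idea, with automatic bandwidth selection
d <- density(X, bw = h, kernel = 'gaussian')
```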

## Advent of Code 2017 in R: Day 2

Day 2 of the Advent of Code provides us with a tab-delimited input consisting of numbers 2-4 digits long and asks us to calculate its “checksum”. The checksum is defined as the sum, over all rows, of the difference between each row's largest and smallest values. Awesome! This is a problem that is well-suited for base R.

I started by reading the file in using read.delim, specifying header = F in order to ensure that numbers within the first row of the data are not treated as variable names.

When working with short problems like this where I know I won’t be rerunning my code or reloading my data often, I will use file.choose() in my read.whatever functions for speed. file.choose() opens Windows Explorer, allowing you to navigate to your file path.

```r
input <- read.delim(file.choose(), header = F)

# Check the dimensions of input to ensure the data read in correctly.
dim(input)
```


After checking the dimensions of our input, everything looks good. As suspected, this is a perfect opportunity to use some vectorization via the apply function.

```r
row_diff <- apply(input, 1, function(x) max(x) - min(x))
checksum <- sum(row_diff)
checksum
```


Et voilà, the answer is 45,972!