ML Theory: Conditional Probability 101

Venn diagrams are useful tools for visualizing different probability spaces. Consider two events C and D within some sample space S. Suppose C is the event of owning a cat and D is the event of owning a dog. Our Venn diagram might look something like the following for a sample of 450 pet owners.

Venn diagram of a sample of 450 pet owners. Here, C is the event of an individual owning a cat and D is the event of an individual owning a dog.

In other words, out of a sample of 450 pet owners,

  • 400 owned a cat exclusively,
  • 36 owned a dog exclusively,
  • 9 owned both a cat and a dog, and
  • 5 owned neither a cat nor a dog.

Obviously, the probability of owning a cat,

    \[ Pr(C) = Pr(\text{owning a cat}) = \frac{400 + 9}{450} = 0.91 \]

is much higher than the probability of owning a dog,

    \[ Pr(D) = Pr(\text{owning a dog}) = \frac{36 + 9}{450} = 0.10 \]

as cats are the superior animal. We can also estimate the probability of being one of those people with neither a cat nor a dog as

    \[ Pr(\overline{C \cup D}) = \frac{5}{450} = 0.01 \]

Now, suppose we select a cat owner at random. What is the probability that they are also a dog owner? Well, since we know the individual is a cat owner, we only need to focus on the left hand bubble of our Venn diagram.

The number of individuals of interest to us is therefore 409, which we will denote n_{C}. Of those 409, only 9 are within the intersection (denoted C \cap D). We will refer to these 9 as n_{C \cap D}. The relevant probability can therefore be estimated as,

    \[ Pr(D|C) = \frac{n_{C \cap D}}{n_{C}} = \frac{9}{409} = 0.02 \]

Pr(D|C) represents the probability an individual owns a dog given they have a cat. This is what we refer to as a conditional probability. When calculating a conditional probability, we use additional information to reduce the size of our probability space. In this example, this meant that we could focus solely on the cat owner portion of the pictured Venn diagram.

In general, for two events A and B, the law of conditional probability states that the probability of observing A given event B can be represented as

    \[ Pr(A|B) = \frac{Pr(A \cap B)}{Pr(B)} \]

This is exactly the formula we were using for the cats and dogs example,

    \[ Pr(D|C) = \frac{Pr(D \cap C)}{Pr(C)} = \frac{9}{450} \times \frac{450}{409} = \frac{9}{409} \]
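
To make the arithmetic concrete, here is a minimal Python sketch (the variable names are my own) that recovers these probabilities from the Venn diagram counts:

    # Conditional probability from the pet-owner counts in the Venn diagram
    n_total = 450          # total pet owners sampled
    n_C = 400 + 9          # cat owners (cat only + both)
    n_D = 36 + 9           # dog owners (dog only + both)
    n_CD = 9               # own both a cat and a dog

    pr_C = n_C / n_total             # Pr(C), roughly 0.91
    pr_CD = n_CD / n_total           # Pr(C and D) = 0.02
    pr_D_given_C = pr_CD / pr_C      # Pr(D | C) = Pr(C and D) / Pr(C)

    print(round(pr_D_given_C, 3))    # 0.022, i.e. 9 / 409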

We often use conditional probability in our everyday lives without thinking about it. For example, if you tested positive for an extremely rare disease, you should be thinking that the test is likely inaccurate rather than that you have the disease… or maybe not, because you might be freaking out right now. Your doctor, however, should be thinking exactly that, which is why they would send you for further testing.
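
To see why your doctor should be sceptical, here is a quick worked example using Bayes’ theorem (a direct consequence of the conditional probability formula above) with purely hypothetical numbers: a prevalence of 1 in 10,000 and a test that is 99% sensitive and 99% specific.

    \begin{align*} Pr(\text{disease} \mid +) &= \frac{Pr(+ \mid \text{disease}) Pr(\text{disease})}{Pr(+ \mid \text{disease}) Pr(\text{disease}) + Pr(+ \mid \text{healthy}) Pr(\text{healthy})} \\ &= \frac{0.99 \times 0.0001}{0.99 \times 0.0001 + 0.01 \times 0.9999} \approx 0.01 \end{align*}

Under these illustrative numbers, roughly 99% of positive results would be false positives, which is exactly why further testing is warranted.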

ML Theory: Kernel Regression

Motivation

For observed pairs (x_i, y_i), i = 1, \hdots, n, the relationship between x and y can be defined generally as

    \[ y_i = m(x_i) + \varepsilon_i \]

where m(x_i) = E[y_i | x = x_i] and \varepsilon_i \sim^{iid} (0, \sigma^2). If we are unsure about the form of m(\cdot), our objective may be to estimate m(\cdot) without making too many assumptions about its shape. In other words, we aim to “let the data speak for itself”.

Simulated scatterplot of y = m(x) + \varepsilon. Here, x \sim Uniform(0, 10) and \varepsilon \sim N(0, 1). The true function m(x) = \sin(x) is displayed in green.

Non-parametric approaches require only that m(\cdot) be smooth and continuous. These assumptions are far less restrictive than those of alternative parametric approaches, thereby increasing the number of potential fits and providing additional flexibility. This makes non-parametric models particularly appealing when prior knowledge about m(\cdot)'s functional form is limited.

Estimating the Regression Function

If multiple values of y were observed at each x, m(x) could be estimated by averaging the value of the response at each x. However, since x is often continuous, repeated observations at any single x are rare. Instead, a neighbourhood of x is considered.

Result of averaging y_i at each x_i. The fit is extremely rough due to gaps in x and low y frequency at each x.

Define the neighbourhood around x as x \pm \lambda for some bandwidth \lambda > 0. Then, a simple non-parametric estimate of m(x) can be constructed as the average of the y_i's corresponding to the x_i within this neighbourhood. That is,

    \[ \hat{m}_{\lambda}(x) = \frac{\sum_{i=1}^{n} \mathbb{I}(|x - x_i| \leq \lambda)~ y_i}{\sum_{i=1}^{n} \mathbb{I}(|x - x_i| \leq \lambda)} = \frac{\sum_{i=1}^{n} K\left( \frac{x - x_i}{\lambda} \right) y_i}{\sum_{i=1}^{n} K\left( \frac{x - x_i}{\lambda} \right) } \]

where

    \[ K(u) = \begin{cases} \frac{1}{2} & |u| \leq 1 \\ 0  & \text{otherwise} \end{cases} \]

is the uniform kernel. This estimator, referred to as the Nadaraya-Watson estimator, can be generalized to any kernel function K(u) (see my previous blog post). It is, however, conventional to use kernel functions of order \nu = 2 (e.g. the Gaussian and Epanechnikov kernels).

The red line is the result of estimating m(x) with a Gaussian kernel and arbitrarily selected bandwidth of \lambda = 1.25. The green line represents the true function \sin(x).
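
To make the estimator concrete, here is a minimal NumPy sketch of the Nadaraya-Watson estimator with a Gaussian kernel. This is a sketch only: the simulated data are meant to mimic the setup in the figures (x \sim Uniform(0, 10), y = \sin(x) plus N(0, 1) noise) and \lambda = 1.25 is again chosen arbitrarily.

    import numpy as np

    def gaussian_kernel(u):
        # standard normal density used as the kernel K(u)
        return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

    def nadaraya_watson(x0, x, y, bandwidth, kernel=gaussian_kernel):
        # weighted average of the y_i, with weights K((x0 - x_i) / lambda)
        weights = kernel((x0 - x) / bandwidth)
        return np.sum(weights * y) / np.sum(weights)

    rng = np.random.default_rng(42)
    x = rng.uniform(0, 10, size=200)
    y = np.sin(x) + rng.normal(0, 1, size=200)

    grid = np.linspace(0, 10, 101)
    fit = np.array([nadaraya_watson(g, x, y, bandwidth=1.25) for g in grid])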

Kernel and Bandwidth Selection

The implementation of a kernel estimator requires two choices:

  • the kernel, K(u), and
  • the smoothing parameter, or bandwidth, \lambda.

Kernels are often selected based on their smoothness and compactness. We prefer a compact kernel as it ensures that only data local to the point of interest is considered. The optimal choice, under some standard assumptions, is the Epanechnikov kernel. This kernel has the advantages of some smoothness, compactness, and rapid computation.

The choice of bandwidth \lambda is critical to the performance of the estimator and far more important than the choice of kernel. If the smoothing parameter is too small, the estimator will be too rough; but if it is too large, we risk smoothing out important features of the function. In other words, choosing \lambda involves a significant bias-variance trade-off.


  • \lambda \uparrow ~~\Rightarrow~ smoother curve, lower variance, higher bias
  • \lambda \downarrow ~~\Rightarrow~ rougher curve, higher variance, lower bias

The simplest way of selecting \lambda is to plot \hat{m}_\lambda(x) for a range of different \lambda and pick the one that looks best. The eye can always visualize additional smoothing, but it is not so easy to imagine what a less smooth fit might look like. For this reason, it is recommended that you choose the least smooth fit that does not show any implausible fluctuations.

Kernel regression fits for various values of \lambda.

Cross-Validation Methods

Selecting the amount of smoothing using subjective methods requires time and effort. Automatic selection of \lambda can be done via cross-validation. The cross-validation criterion is

    \[ CV(\lambda) = \frac{1}{n} \sum_{j=1}^{n} \left( y_j - \hat{m}_{\lambda}^{(-j)}(x_j) \right)^2 \]

where (-j) indicates that point j is left out of the fit. The basic idea is to leave out observation j and estimate m(\cdot) based on the other n-1 observations. \lambda is chosen to minimize this criterion.

True cross-validation is computationally expensive, so an approximation known as generalized cross-validation (GCV) is often used. GCV approximates CV and involves only one non-parametric fit for each \lambda value (compared to CV which requires n fits at each \lambda).

In order to approximate CV, it is important to note that kernel smooths are linear. That is,

    \[ \hat{y} = \left( \hat{m}_{\lambda}(x_1), \hdots, \hat{m}_{\lambda}(x_n) \right)^{T} = S_{\lambda} y \]

where S_{\lambda} is an n \times n smoothing matrix. S_{\lambda} is analogous to the hat matrix H in parametric linear models.

    \[ H = X(X^{T}X)^{-1}X^{T} \]

It can be shown that

    \[ CV(\lambda) = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{y_i - \hat{m}_{\lambda}(x_i)}{1 - s_{ii}(\lambda)} \right]^2 \]

where s_{ii}(\lambda) is the i^{th} diagonal element of S_{\lambda} (hence s_{ii} is analogous to h_{ii}, the leverage of the i^{th} observation). Using the smoothing matrix,

    \[ GCV(\lambda) = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{y_i - \hat{m}_{\lambda}(x_i)}{1 - \frac{\text{tr}(S_{\lambda})}{n}}\right]^2 \]

where \text{tr}(S_{\lambda}) is the trace of S_{\lambda}. In this sense, GCV simply replaces each individual leverage s_{ii}(\lambda) in the CV criterion with the average leverage \text{tr}(S_{\lambda})/n.
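
As an illustration of how the smoothing matrix makes these criteria cheap to compute, here is a hedged NumPy sketch (continuing from the regression sketch above and reusing its simulated x and y; the function names are my own) that builds S_\lambda for the Nadaraya-Watson smoother and evaluates CV and GCV over a grid of candidate bandwidths:

    import numpy as np

    def smoother_matrix(x, bandwidth):
        # Row i holds the Nadaraya-Watson weights producing the fit at x_i,
        # so that the vector of fitted values is S_lambda @ y
        u = (x[:, None] - x[None, :]) / bandwidth
        w = np.exp(-0.5 * u**2)              # Gaussian kernel (constant cancels on normalization)
        return w / w.sum(axis=1, keepdims=True)

    def cv_gcv(x, y, bandwidth):
        S = smoother_matrix(x, bandwidth)
        resid = y - S @ y
        cv = np.mean((resid / (1 - np.diag(S)))**2)              # leave-one-out CV shortcut
        gcv = np.mean((resid / (1 - np.trace(S) / len(y)))**2)   # GCV approximation
        return cv, gcv

    # Choose the bandwidth that minimizes CV over a grid of candidate values
    bandwidths = np.linspace(0.25, 3.0, 12)
    best = min(bandwidths, key=lambda lam: cv_gcv(x, y, lam)[0])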

Automatic methods such as CV often work well but sometimes produce estimates that are clearly at odds with the amount of smoothing that contextual knowledge would suggest. It is therefore extremely important to exercise caution when using them; they are best treated as a starting point.

ML Theory: Kernel Density Estimation

Motivation

It is important to have an understanding of some of the more traditional approaches to function estimation and classification before delving into the trendier topics of neural networks and decision trees. Many of these methods build on an understanding of each other and thus to truly be a MACHINE LEARNING MASTER, we’ve got to pay our dues. We will therefore start with the slightly less sexy topic of kernel density estimation.

Let X be a continuous random variable with cumulative distribution function (CDF) F(x) = Pr(X \leq x) and probability density function (PDF)

    \[f(x) = \frac{d}{dx} F(x)\]

Our goal is to estimate f(x) from a random sample \lbrace X_1, \hdots, X_n \rbrace. Estimation of f(x) has a number of applications including construction of the popular Naive Bayes classifier,

    \[ \hat{Pr}(C = c | X = x_0) = \frac{\hat{\pi}_c \hat{f}_{c}(x_0)}{\sum_{k=1}^{C} \hat{\pi}_{k} \hat{f}_{k}(x_0)} \]

Derivation

The CDF is naturally estimated by the empirical distribution function (EDF)

    \[ \hat{F}(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}(X_i \leq x)\]

where

    \[ \mathbb{I}(X_i \leq x) = \begin{cases} 1 & X_i \leq x \\ 0 & \text{otherwise} \end{cases} \]

I’m not saying “naturally” to be a jerk! I know the feeling of reading proof-heavy journal articles that end sections with “extension to the d-dimensional case is trivial”; it’s not fun when it’s not trivial to you. \hat{F}(x) essentially estimates Pr(X \leq x), the probability of X being less than or equal to some threshold x, as the proportion of observations in our sample less than or equal to x.
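
In code, the EDF at a point x is simply the proportion of sample values at or below x. A minimal NumPy sketch (the names are my own, and the standard normal sample is purely illustrative):

    import numpy as np

    def edf(x, sample):
        # \hat{F}(x): proportion of observations X_i with X_i <= x
        return np.mean(sample <= x)

    sample = np.random.default_rng(1).normal(size=500)
    print(edf(0.0, sample))   # close to 0.5 for a standard normal sample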

It might seem natural to estimate the density f(x) as the derivative of \hat{F}(x), but \hat{F}(x) is a step function that places mass only at the observed points and so cannot be differentiated. Instead, let’s consider a discrete derivative. For some small \lambda > 0,

    \[ \hat{f}(x) = \frac{\hat{F}(x + \lambda) - \hat{F}(x - \lambda)}{2\lambda} \]

This can be re-expressed as

    \[ \hat{f}(x) = \frac{\sum_{i=1}^{n} \mathbb{I}(x - \lambda \leq X_i \leq x + \lambda)}{2n\lambda} \]

since Pr(X \leq b) - Pr(X \leq a) = Pr(a < X \leq b), and the endpoints contribute no probability for a continuous X (draw a picture if you need to convince yourself)! Simplifying further,

    \begin{align*} \hat{f}(x) &=\frac{\sum_{i=1}^{n} \mathbb{I}(-\lambda \leq X_i - x \leq \lambda)}{2n\lambda} \\ &= \frac{1}{2n\lambda} \sum_{i=1}^{n} \mathbb{I} \left( |X_i - x| \leq \lambda \right) \\ &= \frac{1}{2n\lambda} \sum_{i=1}^{n} \mathbb{I} \left( \frac{|X_i - x|}{\lambda} \leq 1 \right) \\ &= \frac{1}{n\lambda} \sum_{i=1}^{n} K \left(\frac{X_i - x}{\lambda} \right) \end{align*}

where

    \[ K(u) = \begin{cases} \frac{1}{2} & |u| \leq 1 \\ 0 & |u| > 1 \end{cases} \]

is the uniform density function on [-1, 1]. Mathemagic!

From our derivation, we see that \hat{f}(x) essentially counts the number of observations within a small distance, \lambda, of x: the more observations that fall within this window, the larger \hat{f}(x) is. The bandwidth \lambda dictates the size of the window of X_i that are considered and therefore controls the degree of smoothing.

Our derived estimate \hat{f}(x) is a special case of what is referred to as a kernel estimator. The general case is

    \[ \hat{f}(x) = \frac{1}{n\lambda} \sum_{i=1}^{n} K \left(\frac{X_i - x}{\lambda} \right) \]

where K(u) is a kernel function.
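
A minimal NumPy sketch of this general estimator, evaluated over a grid of x values, might look as follows. The uniform and Gaussian kernels are included for concreteness, and the standard normal sample and bandwidth are purely illustrative.

    import numpy as np

    def uniform_kernel(u):
        # K(u) = 1/2 for |u| <= 1, 0 otherwise
        return 0.5 * (np.abs(u) <= 1)

    def gaussian_kernel(u):
        # standard normal density
        return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

    def kde(grid, sample, bandwidth, kernel=gaussian_kernel):
        # \hat{f}(x) = (1 / (n * lambda)) * sum_i K((X_i - x) / lambda), vectorized over the grid
        u = (sample[None, :] - grid[:, None]) / bandwidth
        return kernel(u).sum(axis=1) / (len(sample) * bandwidth)

    rng = np.random.default_rng(0)
    sample = rng.normal(size=300)
    grid = np.linspace(-4, 4, 161)
    density = kde(grid, sample, bandwidth=0.344)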

Kernel Functions

A kernel function K(u) : \mathbb{R} \rightarrow \mathbb{R} is any function which satisfies

    \[ \int_{-\infty}^{\infty} K(u) du = 1 \]

The kernel function acts as our weighting function, assigning less mass to observations farther from x. This helps to ensure that our fitted curve is smooth.

Non-negative kernels satisfy K(u) \geq 0 for all u and are therefore probability density functions. Symmetric kernels satisfy K(u) = K(-u) for all u. The Gaussian, or Normal, distribution is a popular symmetric, non-negative kernel.

The moments of a kernel are defined as

    \[ \kappa_j(K) = \int_{-\infty}^{\infty} u^{j} K(u) du \]

The order of a kernel, \nu, is defined as the order of the first non-zero moment. For example, if \kappa_1(K) = 0 and \kappa_2(K) > 0, then K is a second-order kernel and \nu = 2. Symmetric non-negative kernels are second-order and hence second-order kernels are the most common in practice.

Other popular kernels include the Epanechnikov, uniform, bi-weight, and tri-weight kernels. The Epanechnikov kernel is considered to be the optimal kernel in the sense that it minimizes the asymptotic mean integrated squared error (AMISE). Choice of the bandwidth, however, is often more influential on estimation quality than choice of kernel.
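
As a quick check of these definitions, consider the Epanechnikov kernel K(u) = \frac{3}{4}(1 - u^2) for |u| \leq 1 (and 0 otherwise). It integrates to one, its first moment vanishes by symmetry, and its second moment is positive, confirming that it is a second-order kernel:

    \begin{align*} \int_{-1}^{1} \tfrac{3}{4}(1 - u^2) ~du &= \tfrac{3}{4} \left[ u - \tfrac{u^3}{3} \right]_{-1}^{1} = 1 \\ \kappa_1(K) &= \int_{-1}^{1} \tfrac{3}{4} u (1 - u^2) ~du = 0 \\ \kappa_2(K) &= \int_{-1}^{1} \tfrac{3}{4} u^2 (1 - u^2) ~du = \tfrac{1}{5} \end{align*}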

Kernel density estimates for various bandwidths. The thick black line represents the optimal bandwidth, \lambda = 0.344. The jagged dotted line is the estimate of f(x) when the bandwidth is halved, \lambda = 0.172. The flatter, bell-shaped curve represents \lambda = 1.5, which clearly oversmooths the data.

Bandwidth Considerations

As noted above, the bandwidth \lambda determines the size of the envelope around x and thus the number of X_i used for estimation. In the case of a Gaussian kernel, \lambda translates to the standard deviation. For k-nearest neighbours, \lambda translates to the size of the neighbourhood, expressed as the span k/n (k points within the n observation training set).

The infamous bias-variance trade-off must be considered when selecting \lambda. If we choose a small value of \lambda, we consider a smaller number of X_i. This results in higher variance, due to the smaller effective sample size, but lower bias, as each X_i is closer to x. As we increase \lambda, our window size increases and we consider a larger number of X_i. This reduces our variance, but our bias is now higher, as we are using X_i that are further from x and thus information that may not be particularly relevant.

In other words, if \lambda is too large we will smooth out important information but if it is too small, our estimate will be too rough and contain unnecessary noise. Choosing \lambda is no easy task and several methods for bandwidth selection have been proposed including cross-validation methods, rules of thumb, and visual inspection.

Personally, I prefer to use cross-validation as a starting point since I try to minimize the effect of my own biases on estimation. However, these methods aren’t perfect and if feasible, I will follow this up with visual inspection to ensure that the CV bandwidth makes sense in the context of my data and problem. I will generally select a slightly rougher fit over a smoother fit as it is easier for the human eye to imagine a smoother fit than a rougher fit!
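
As a sketch of that workflow, one common automatic criterion is leave-one-out likelihood cross-validation, which picks the \lambda maximizing the leave-one-out log-likelihood. The snippet below reuses gaussian_kernel and sample from the KDE sketch above; it illustrates just one of several CV criteria for densities and is a starting point rather than a definitive implementation.

    import numpy as np

    def loo_log_likelihood(sample, bandwidth, kernel=gaussian_kernel):
        # sum_i log \hat{f}_{-i}(X_i), where observation i is left out of its own density estimate
        n = len(sample)
        k = kernel((sample[None, :] - sample[:, None]) / bandwidth)
        np.fill_diagonal(k, 0.0)                      # drop observation i from its own fit
        f_loo = k.sum(axis=1) / ((n - 1) * bandwidth)
        return np.sum(np.log(f_loo))

    bandwidths = np.linspace(0.1, 1.5, 15)
    best = max(bandwidths, key=lambda lam: loo_log_likelihood(sample, lam))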

Properties of the Kernel Density Estimator

The kernel density estimator

    \[ \hat{f}(x) = \frac{1}{n\lambda} \sum_{i=1}^{n} K \left( \frac{X_i - x}{\lambda} \right) \]

has several convenient numerical properties (the last two are verified numerically in the sketch following this list):

  1. If the kernel function K(u) is non-negative, the density estimator \hat{f}(x) is also non-negative.
  2. \hat{f}(x) integrates to one, making it a valid density function when K is non-negative.

    To prove this, let

        \[u = \frac{X_i - x}{\lambda}\]

    Then, via a change-of-variables,

        \begin{align*} \int_x \hat{f}(x) ~dx &= \frac{1}{n} \sum_{i=1}^{n} \int_{-\infty}^{\infty} \frac{1}{\lambda} K \left(\frac{X_i - x}{\lambda} \right)~dx \\ &= \frac{1}{n} \sum_{i=1}^{n} \int_{-\infty}^{\infty} K(u) ~ du \\ &= \frac{1}{n} \sum_{i=1}^{n} 1 \\  &= 1 \end{align*}

  3. The mean of the estimated density is \bar{X}.

    Using the following transformation,

        \[ u = \frac{X_i - x}{\lambda} \Rightarrow x = X_i - u\lambda \]

    and thus

        \begin{align*} E[X] = \int_{x} x \hat{f}(x) ~dx &= \frac{1}{n} \sum_{n} \int_{x} \frac{x}{\lambda} K \left(\frac{X_i - x}{\lambda} \right)~dx \\ &= \frac{1}{n} \sum_{n} \int_{u} (X_i - u\lambda) K(u) ~du \\ &= \frac{1}{n} \sum_{n} X_i \int_{u} K(u) ~du - \frac{\lambda}{n} \sum_{n} \int_{u} u K(u) ~du \end{align*}

    Recall that the kernel function K(u) must integrate to one and that for second-order kernels,

        \[ \kappa_1(K) = \int_{u} u K(u) ~ du = 0 \]

    Therefore,

        \[ \int_{x} x \hat{f}(x) ~dx = \frac{1}{n} \sum_{n} X_i = \bar{X}\]

  4. The variance of the estimated density is \hat{\sigma}^2 + \lambda^{2} \kappa_2(K). That is, the sample variance is inflated by a factor of \lambda^{2} \kappa_2(K) when the density is estimated.

    The variance of X is given by

        \[ Var[X] = E[X^2] - (E[X])^2 \]

    The second moment of the estimated density is

        \begin{align*}  E[X^2] &= \int_{x} x^2 \hat{f}(x) ~dx  = \frac{1}{n} \sum_{n} \int_{x} \frac{x^2}{\lambda} K\left(\frac{X_i - x}{\lambda} \right)~dx \\  &= \frac{1}{n} \sum_n \int_{u} (X_i - u\lambda)^2 K(u)~du \\  &= \frac{1}{n} \sum_{n} X_i^2 - \frac{2\lambda}{n} \sum_{n} X_i \int_{u} u K(u)~du + \lambda^2 \int_{u} u^2 K(u) ~du \\  &= \frac{1}{n} \sum_{n} X_i^2 - \frac{2\lambda}{n} \sum_{n} X_i \kappa_1(K) + \lambda^2 \kappa_2(K) \\  &= \frac{1}{n} \sum_{n} X_i^2 + \lambda^2 \kappa_2(K)  \end{align*}

    Thus,

        \begin{align*}  Var[X] &= \frac{1}{n} \sum_{n} X_i^2 + \lambda^2 \kappa_2(K) - \left( \frac{1}{n} \sum_{n} X_i \right)^2 \\  &= \left\lbrace \frac{1}{n} \sum_{n} X_i^2 - \left( \frac{1}{n} \sum_{n} X_i \right)^2 \right\rbrace + \lambda^2 \kappa_2(K) \\  &= \hat{\sigma}^2 + \lambda^2 \kappa_2(K)  \end{align*}
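
These two properties are easy to verify numerically. The sketch below reuses kde(), gaussian_kernel, and sample from the earlier snippets (for the Gaussian kernel, \kappa_2(K) = 1, and \lambda = 0.5 is arbitrary); it approximates the mean and variance of \hat{f} by a Riemann sum on a fine grid and compares them to \bar{X} and \hat{\sigma}^2 + \lambda^2 \kappa_2(K).

    import numpy as np

    lam = 0.5
    grid = np.linspace(sample.min() - 5 * lam, sample.max() + 5 * lam, 4001)
    dx = grid[1] - grid[0]
    fhat = kde(grid, sample, bandwidth=lam)

    mean_fhat = np.sum(grid * fhat) * dx                 # property 3: should be close to the sample mean
    var_fhat = np.sum(grid**2 * fhat) * dx - mean_fhat**2

    print(mean_fhat, sample.mean())
    print(var_fhat, sample.var() + lam**2 * 1.0)         # property 4: kappa_2(K) = 1 for the Gaussian kernel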

Summary

  • The empirical distribution function (EDF) assigns a mass of 1/n to each X_i, resulting in a discrete or “jumpy” estimate.
  • Kernel density estimators (KDE) estimate f(x) by constructing a neighbourhood around the point of interest x. Observations within this neighbourhood are then assigned a mass based on their distance from x via a kernel function, resulting in a smooth estimate.
  • Popular kernel choices are the Gaussian and Epanechnikov kernels. Both are symmetric, non-negative kernels (and hence proper density functions), which makes them second-order kernels.
  • The size of the neighbourhood is dictated by the bandwidth \lambda. Special care must be taken when selecting \lambda in order to ensure that the bias-variance trade-off is balanced for your problem. Different methods such as CV are available to assist you with optimal bandwidth selection.
  • Choice of bandwidth has a larger impact on estimation quality than your choice of kernel.

I will be extending the kernel density estimator to kernel regression in my future blog posts and conducting a case study in R that uses these methods. Stay tuned!