ML Theory: Conditional Probability 101

Venn diagrams are useful tools for visualizing different probability spaces. Consider two events C and D within some sample space S. Suppose C is the event of owning a cat and D is the event of owning a dog. Our Venn diagram might look something like the following for a sample of 450 pet owners.

Venn diagram of a sample of 450 pet owners. Here, C is the event of an individual owning a cat and D is the event of an individual owning a dog.

In other words, out of a sample of 450 pet owners,

  • 400 owned a cat exclusively,
  • 36 owned a dog exclusively,
  • 9 owned both a cat and a dog, and
  • 5 owned neither a cat nor a dog.

Obviously, the probability of owning a cat,

    \[ Pr(C) = Pr(\text{owning a cat}) = \frac{400 + 9}{450} = 0.91 \]

is much higher than the probability of owning a dog,

    \[ Pr(D) = Pr(\text{owning a dog}) = \frac{36 + 9}{450} = 0.10 \]

as cats are the superior animal. We can also estimate the probability of being one of those people with neither a cat nor a dog as

    \[ Pr(\overline{C \cup D}) = \frac{5}{450} = 0.01 \]

Now, suppose we select a cat owner at random. What is the probability that they are also a dog owner? Well, since we know the individual is a cat owner, we only need to focus on the left hand bubble of our Venn diagram.

The number of individuals of interest to us is therefore 409, which we will denote n_{C}. Of those 409, only 9 are within the intersection (denoted C \cap D). We will refer to these 9 as n_{C \cap D}. The relevant probability can therefore be estimated as,

    \[ Pr(D|C) = \frac{n_{C \cap D}}{n_{C}} = \frac{9}{409} = 0.02 \]

Pr(D|C) represents the probability an individual owns a dog given they have a cat. This is what we refer to as a conditional probability. When calculating a conditional probability, we use additional information to reduce the size of our probability space. In this example, this meant that we could focus solely on the cat owner portion of the pictured Venn diagram.

In general, for two events A and B, the definition of conditional probability states that the probability of observing A given that B has occurred is

    \[ Pr(A|B) = \frac{Pr(A \cap B)}{Pr(B)} \]

This is exactly the formula we were using for the cats and dogs example,

    \[ Pr(D|C) = \frac{Pr(D \cap C)}{Pr(C)} = \frac{9}{450} \times \frac{450}{409} = \frac{9}{409} \]
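The same arithmetic in base R, using the counts from the Venn diagram above (just a sanity check of the formula; the object names are my own):

# Counts from the Venn diagram of 450 pet owners
n        <- 450
cat_only <- 400
dog_only <- 36
both     <- 9
neither  <- 5

pr_C       <- (cat_only + both) / n   # Pr(C), roughly 0.91
pr_D       <- (dog_only + both) / n   # Pr(D), exactly 0.10
pr_neither <- neither / n             # Pr(neither), roughly 0.01

# Conditional probability Pr(D | C) = Pr(D and C) / Pr(C)
(both / n) / pr_C                     # roughly 0.022, i.e. 9/409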

We often use conditional probability in our everyday lives without thinking about it. For example, if you tested positive for an extremely rare disease, you should be thinking that the test is likely inaccurate rather than that you have the disease… or maybe not, because you might be freaking out right now. Your doctor, however, should be thinking exactly that, and would send you for further testing.
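To see why, here is a back-of-the-envelope version of that reasoning with made-up numbers (a prevalence of 1 in 10,000 and a test that is 99% accurate in both directions). Even after a positive result, the conditional probability of actually having the disease is only about 1%.

# Hypothetical numbers: 1 in 10,000 people have the disease; the test
# catches 99% of true cases but also flags 1% of healthy people.
population     <- 1e6
sick           <- population / 1e4            # 100 people
healthy        <- population - sick
true_positive  <- 0.99 * sick                 # 99 people
false_positive <- 0.01 * healthy              # 9,999 people

# Pr(disease | positive) = n(disease and positive) / n(positive)
true_positive / (true_positive + false_positive)   # roughly 0.0098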

ML Theory: Kernel Regression

Motivation

For observed pairs (x_i, y_i), i = 1, \hdots, n, the relationship between x and y can be defined generally as

    \[ y_i = m(x_i) + \varepsilon_i \]

where m(x_i) = E[y_i | x = x_i] and \varepsilon_i \sim^{iid} (0, \sigma^2). If we are unsure about the form of m(\cdot), our objective may be to estimate m(\cdot) without making too many assumptions about its shape. In other words, we aim to “let the data speak for itself”.

Simulated scatterplot of y = m(x) + \epsilon. Here, x \sim Uniform(0, 10) and \epsilon \sim N(0, 1). The true function m(x) = sin(x) is displayed in green.

Non-parametric approaches require only that m(\cdot) be smooth and continuous. These assumptions are far less restrictive than those of alternative parametric approaches, thereby increasing the number of potential fits and providing additional flexibility. This makes non-parametric models particularly appealing when prior knowledge about m(\cdot)'s functional form is limited.

Estimating the Regression Function

If multiple values of y were observed at each x, m(x) could be estimated by averaging the value of the response at each x. However, since x is often continuous, it can take on a wide range of values, making repeated observations at any given x quite rare. Instead, a neighbourhood of x is considered.

Result of averaging y_i at each x_i. The fit is extremely rough due to gaps in x and low y frequency at each x.

Define the neighbourhood around x as x \pm \lambda for some bandwidth \lambda > 0. Then, a simple non-parametric estimate of m(x) can be constructed as the average of the y_i's corresponding to the x_i within this neighbourhood. That is,

(1)   \begin{equation*} \hat{m}_{\lambda}(x) = \frac{\sum_{i=1}^{n} \mathbb{I}(|x - x_i| \leq \lambda)~ y_i}{\sum_{i=1}^{n} \mathbb{I}(|x - x_i| \leq \lambda)} = \frac{\sum_{i=1}^{n} K\left( \frac{x - x_i}{\lambda} \right) y_i}{\sum_{i=1}^{n} K\left( \frac{x - x_i}{\lambda} \right) } \end{equation*}

where

    \[ K(u) = \begin{cases} \frac{1}{2} & |u| \leq 1 \\ 0  & \text{o.w.} \end{cases} \]

is the uniform kernel. This estimator, referred to as the Nadaraya-Watson estimator, can be generalized to any kernel function K(u) (see my previous blog post). It is, however, convention to use kernel functions of order \nu = 2 (e.g. the Gaussian and Epanechnikov kernels).

The red line is the result of estimating m(x) with a Gaussian kernel and arbitrarily selected bandwidth of \lambda = 1.25. The green line represents the true function sin(x).
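For concreteness, here is a minimal base R sketch of this estimator with a Gaussian kernel. The simulation below is my own re-creation of the setup described in the figures (not the original code), and nw_estimate is simply a name I chose.

# Nadaraya-Watson estimator at a point x0 with a Gaussian kernel
nw_estimate <- function(x0, x, y, lambda) {
  w <- dnorm((x - x0) / lambda)   # kernel weights for each observation
  sum(w * y) / sum(w)             # weighted average of the responses
}

# Re-create the simulation: y = sin(x) + noise
set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200)

# Evaluate the fit on a grid using lambda = 1.25
grid <- seq(0, 10, length.out = 200)
fit  <- sapply(grid, nw_estimate, x = x, y = y, lambda = 1.25)

plot(x, y)
lines(grid, fit, col = "red")         # kernel regression estimate
lines(grid, sin(grid), col = "green") # true function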

Kernel and Bandwidth Selection

The implementation of a kernel estimator requires two choices:

  • the kernel, K(u), and
  • the smoothing parameter, or bandwidth, \lambda.

Kernels are often selected based on their smoothness and compactness. We prefer a compact kernel as it ensures that only data local to the point of interest is considered. The optimal choice, under some standard assumptions, is the Epanechnikov kernel. This kernel has the advantages of some smoothness, compactness, and rapid computation.

The choice of bandwidth \lambda is critical to the performance of the estimator and far more important than the choice of kernel. If the smoothing parameter is too small, the estimator will be too rough; but if it is too large, we risk smoothing out important features of the function. In other words, choosing \lambda involves a significant bias-variance trade-off.


\lambda \uparrow ~~~\Rightarrow smooth curve, low variance, high bias
\lambda \downarrow ~~~\Rightarrow rough curve, high variance, low bias

The simplest way of selecting \lambda is to plot \hat{m}_\lambda(x) for a range of different \lambda and pick the one that looks best. The eye can always visualize additional smoothing, but it is not so easy to imagine what a less smooth fit might look like. For this reason, it is recommended that you choose the least smooth fit that does not show any implausible fluctuations.

Kernel regression fits for various values of \lambda.

Cross-Validation Methods

Selecting the amount of smoothing using subjective methods requires time and effort. Automatic selection of \lambda can be done via cross-validation. The cross-validation criterion is

    \[ CV(\lambda) = \frac{1}{n} \sum_{j=1}^{n} \left( y_j - \hat{m}_{\lambda}^{(-j)}(x_j) \right)^2 \]

where (-j) indicates that point j is left out of the fit. The basic idea is to leave out observation j and estimate m(\cdot) based on the other n-1 observations. \lambda is chosen to minimize this criterion.

True cross-validation is computationally expensive, so an approximation known as generalized cross-validation (GCV) is often used. GCV approximates CV and involves only one non-parametric fit for each \lambda value (compared to CV which requires n fits at each \lambda).

In order to approximate CV, it is important to note that kernel smooths are linear. That is,

    \[ \hat{y} = \left( \hat{m}_{\lambda}(x_1), \hdots, \hat{m}_{\lambda}(x_n) \right)^{T} = S_{\lambda} y \]

where S_{\lambda} is an n \times n smoothing matrix. S_{\lambda} is analogous to the hat matrix H in parametric linear models.

    \[ H = X(X^{T}X)^{-1}X^{T} \]

It can be shown that

    \[ CV(\lambda) = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{y_i - \hat{m}_{\lambda}(x_i)}{1 - s_{ii}(\lambda)} \right]^2 \]

where s_{ii}(\lambda) is the i^{th} diagonal element of S_{\lambda} (hence s_{ii} is analogous to h_{ii}, the leverage of the i^{th} observation). Using the smoothing matrix,

    \[ GCV(\lambda) = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{y_i - \hat{m}_{\lambda}(x_i)}{1 - \frac{\text{tr}(S_{\lambda})}{n}}\right]^2 \]

where \text{tr}(S_{\lambda}) is the trace of S_{\lambda}. In effect, GCV replaces each individual leverage s_{ii}(\lambda) in the CV criterion with the average leverage \text{tr}(S_{\lambda})/n.
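To make the smoothing-matrix view concrete, here is a rough base R sketch (my own, not from the original figures) that builds S_{\lambda} for the Gaussian kernel smooth from the earlier snippet and computes both criteria from a single fit per bandwidth.

# Build the n x n smoothing matrix S_lambda for a Gaussian kernel smooth,
# then get CV and GCV from a single fit. A sketch only; x and y are the
# simulated data from the earlier snippet, with fits taken at the x_i.
smooth_matrix <- function(x, lambda) {
  W <- outer(x, x, function(a, b) dnorm((a - b) / lambda))
  W / rowSums(W)                    # row i holds the weights defining m_hat(x_i)
}

cv_gcv <- function(x, y, lambda) {
  S     <- smooth_matrix(x, lambda)
  resid <- y - as.vector(S %*% y)   # y_i - m_hat_lambda(x_i)
  cv    <- mean((resid / (1 - diag(S)))^2)
  gcv   <- mean((resid / (1 - sum(diag(S)) / length(y)))^2)
  c(CV = cv, GCV = gcv)
}

# Evaluate a grid of bandwidths and pick the CV minimizer
lambdas <- seq(0.2, 3, by = 0.1)
scores  <- sapply(lambdas, cv_gcv, x = x, y = y)
lambdas[which.min(scores["CV", ])]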

Automatic methods such as CV often work well but sometimes produce estimates that are clearly at odds with the amount of smoothing that contextual knowledge would suggest. It is therefore extremely important to exercise caution when using them and it is recommended that they be used as a starting point.

ML Theory: Kernel Density Estimation

Motivation

It is important to have an understanding of some of the more traditional approaches to function estimation and classification before delving into the trendier topics of neural networks and decision trees. Many of these methods build on one another, and thus to truly be a MACHINE LEARNING MASTER, we’ve got to pay our dues. We will therefore start with the slightly less sexy topic of kernel density estimation.

Let X be a random variable with a continuous cumulative distribution function (CDF) F(x) = Pr(X \leq x) and probability density function (PDF)

    \[f(x) = \frac{d}{dx} F(x)\]

Our goal is to estimate f(x) from a random sample \lbrace X_1, \hdots, X_n \rbrace. Estimation of f(x) has a number of applications, including construction of the popular Naive Bayes classifier over K classes,

    \[ \hat{Pr}(C = c | X = x_0) = \frac{\hat{\pi}_c \hat{f}_{c}(x_0)}{\sum_{k=1}^{K} \hat{\pi}_{k} \hat{f}_{k}(x_0)} \]

Derivation

The CDF is naturally estimated by the empirical distribution function (EDF)

    \[ \hat{F}(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{I}(X_i \leq x)\]

where

    \[ \mathbb{I}(X_i \leq x) = \begin{cases} 1 & X_i \leq x \\ 0 & \text{otherwise} \end{cases} \]

I’m not saying “naturally” to be a jerk! I know the feeling of reading proof-heavy journal articles that end sections with “extension to the d-dimensional case is trivial”; it’s not fun when it’s not trivial to you. \hat{F}(x) essentially estimates Pr(X \leq x), the probability of X being less than or equal to some threshold x, as the proportion of observations in our sample less than or equal to x.

It might seem natural to estimate the density f(x) as the derivative of \hat{F}(x), but \hat{F}(x) is a step function that places mass 1/n at each observation and so is not differentiable. Instead, let’s consider a discrete derivative. For some small \lambda > 0,

    \[ \hat{f}(x) = \frac{\hat{F}(x + \lambda) - \hat{F}(x - \lambda)}{2\lambda} \]

This can be re-expressed as

    \[ \hat{f}(x) = \frac{\sum_{i=1}^{n} \mathbb{I}(x - \lambda \leq X_i \leq x + \lambda)}{2n\lambda} \]

since Pr(X \leq b) - Pr(X \leq a) = Pr(a \leq X \leq b) (draw a picture if you need to convince yourself)! Simplifying further,

    \begin{align*} \hat{f}(x) &=\frac{\sum_{i=1}^{n} \mathbb{I}(-\lambda \leq X_i - x \leq \lambda)}{2n\lambda} \\ &= \frac{1}{2n\lambda} \sum_{i=1}^{n} \mathbb{I} \left( |X_i - x| \leq \lambda \right) \\ &= \frac{1}{2n\lambda} \sum_{i=1}^{n} \mathbb{I} \left( \frac{|X_i - x|}{\lambda} \leq 1 \right) \\ &= \frac{1}{n\lambda} \sum_{i=1}^{n} K \left(\frac{X_i - x}{\lambda} \right) \end{align*}

where

    \[ K(u) = \begin{cases} \frac{1}{2} & |u| \leq 1 \\ 0 & |u| > 1 \end{cases} \]

is the uniform density function on [-1, 1]. Mathemagic!

From our derivation, we see that \hat{f}(x) essentially determines the number of observations within a small distance, \lambda, of x. The bandwidth \lambda dictates the size of the window for which X_i are considered. That is, the bandwidth controls the degree of smoothing. The greater the number of observations within this window, the greater \hat{f}(x) is.

Our derived estimate \hat{f}(x) is a special case of what is referred to as a kernel estimator. The general case is

(1)   \begin{equation*} \hat{f}(x) = \frac{1}{n\lambda} \sum_{i=1}^{n} K \left(\frac{X_i - x}{\lambda} \right) \end{equation*}

where K(u) is a kernel function.
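Equation (1) translates almost directly into base R. The sketch below is mine (kde_at is a made-up name, and the Gaussian kernel dnorm is used by default); in practice you would reach for density(), but writing it out makes the formula transparent.

# Kernel density estimate at a point x0, following equation (1)
kde_at <- function(x0, X, lambda, K = dnorm) {
  mean(K((X - x0) / lambda)) / lambda
}

# Example: estimate the density of a standard normal sample
set.seed(1)
X    <- rnorm(100)
grid <- seq(-4, 4, length.out = 200)
fhat <- sapply(grid, kde_at, X = X, lambda = 0.4)

plot(grid, fhat, type = "l")
curve(dnorm(x), add = TRUE, lty = 2)   # true density for comparison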

Kernel Functions

A kernel function K(u) : \mathbb{R} \rightarrow \mathbb{R} is any function which satisfies

    \[ \int_{-\infty}^{\infty} K(u) du = 1 \]

The kernel function acts as our weighting function, assigning less mass to observations farther from x. This helps to ensure that our fitted curve is smooth.

Non-negative kernels satisfy K(u) \geq 0 for all u and are therefore probability density functions. Symmetric kernels satisfy K(u) = K(-u) for all u. The Gaussian, or Normal, distribution is a popular symmetric, non-negative kernel.

The moments of a kernel are defined as

    \[ \kappa_j(K) = \int_{-\infty}^{\infty} u^{j} K(u) du \]

The order of a kernel, \nu, is defined as the order of its first non-zero moment (excluding \kappa_0, which is always one). For example, if \kappa_1(K) = 0 and \kappa_2(K) > 0, then K is a second-order kernel and \nu = 2. Symmetric non-negative kernels are second-order, and hence second-order kernels are the most common in practice.

Other popular kernels include the Epanechnikov, uniform, bi-weight, and tri-weight kernels. The Epanechnikov kernel is considered to be the optimal kernel in the sense that it minimizes the asymptotic mean integrated squared error. Choice of the bandwidth, however, is often more influential on estimation quality than choice of kernel.

Kernel density estimates for various bandwidths. The thick black line represents the optimal bandwidth, \lambda = 0.344. The jagged dotted line is the estimate of f(x) when the bandwidth is halved, \lambda = 0.172. The flatter, bell-shaped curve represents \lambda = 1.5 which clearly oversmooths the data.

Bandwidth Considerations

As noted above, the bandwidth \lambda determines the size of the envelope around x and thus the number of x_i used for estimation. In the case of a Gaussian kernel, \lambda would translate to the standard deviation. For k-nearest neighbours, \lambda would translate to the size of the neighbourhood expressed as the span k/N (k points within the N observation training set).

The infamous bias-variance trade-off must be considered when selecting \lambda. If we choose a small value of \lambda, we consider a smaller number of x_i. This results in higher variance, due to the smaller effective sample size, but less bias, as each x_i will be closer to x. As we increase \lambda, our window size increases and we consider a larger number of x_i. This reduces our variance, but our bias will now be higher as we are using x_i that are further from x and thus information that might not be particularly relevant.

In other words, if \lambda is too large we will smooth out important information but if it is too small, our estimate will be too rough and contain unnecessary noise. Choosing \lambda is no easy task and several methods for bandwidth selection have been proposed including cross-validation methods, rules of thumb, and visual inspection.

Personally, I prefer to use cross-validation as a starting point since I try to minimize the effect of my own biases on estimation. However, these methods aren’t perfect and if feasible, I will follow this up with visual inspection to ensure that the CV bandwidth makes sense in the context of my data and problem. I will generally select a slightly rougher fit over a smoother fit as it is easier for the human eye to imagine a smoother fit than a rougher fit!
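In base R this workflow is easy to sketch: bw.nrd0() implements the default rule of thumb and bw.ucv() an unbiased (leave-one-out) cross-validation choice, and both can then be eyeballed with density(). (A rough illustration with a simulated sample, not the data behind the figures.)

# Rule-of-thumb versus cross-validated bandwidths for the same sample,
# using base R's built-in selectors. X is any univariate sample.
set.seed(1)
X <- rnorm(100)

bw_rot <- bw.nrd0(X)   # Silverman-style rule of thumb (density()'s default)
bw_cv  <- bw.ucv(X)    # unbiased (leave-one-out) cross-validation

plot(density(X, bw = bw_cv), main = "KDE: CV vs rule-of-thumb bandwidth")
lines(density(X, bw = bw_rot), lty = 2)
rug(X)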

Properties of the Kernel Density Estimator

The kernel density estimator

    \[ \hat{f}(x) = \frac{1}{n\lambda} \sum_{i=1}^{n} K \left( \frac{X_i - x}{\lambda} \right) \]

has several convenient numerical properties:

  1. If the kernel function K(u) is non-negative, the density estimator \hat{f}(x) is also non-negative.
  2. \hat{f}(x) integrates to one, making it a valid density function when K is non-negative.

    To prove this, let

        \[u = \frac{X_i - x}{\lambda}\]

    Then, via a change-of-variables,

        \begin{align*} \int_x \hat{f}(x) ~dx &= \frac{1}{n} \sum_{i=1}^{n} \int_{-\infty}^{\infty} \frac{1}{\lambda} K \left(\frac{X_i - x}{\lambda} \right)~dx \\ &= \frac{1}{n} \sum_{i=1}^{n} \int_{-\infty}^{\infty} K(u) ~ du \\ &= \frac{1}{n} \sum_{i=1}^{n} 1 \\  &= 1 \end{align*}

  3. The mean of the estimated density is \bar{X}.

    Using the following transformation,

        \[ u = \frac{x - X_i}{\lambda} \Rightarrow x = X_i + u\lambda \]

    and thus, using the symmetry of K (so that K\left(\frac{X_i - x}{\lambda}\right) = K(u)),

        \begin{align*} E[X] = \int_{x} x \hat{f}(x) ~dx &= \frac{1}{n} \sum_{n} \int_{x} \frac{x}{\lambda} K \left(\frac{X_i - x}{\lambda} \right)~dx \\ &= \frac{1}{n} \sum_{n} \int_{u} (X_i + u\lambda) K(u) ~du \\ &= \frac{1}{n} \sum_{n} X_i \int_{u} K(u) ~du + \frac{1}{n} \sum_{n} \int_{u} u K(u) ~du \end{align*}

    Recall that the kernel function K(u) must integrate to one and that for second-order kernels,

        \[ \kappa_1(K) = \int_{u} u K(u) ~ du = 0 \]

    Therefore,

        \[ \int_{x} x \hat{f}(x) ~dx = \frac{1}{n} \sum_{n} X_i = \bar{X}\]

  4. The variance of the estimated density is \hat{\sigma}^2 + \lambda^{2} \kappa_2(K). That is, the sample variance is inflated by a factor of \lambda^{2} \kappa_2(K) when the density is estimated.

    The variance of X is given by

        \[ Var[X] = E[X^2] - (E[X])^2 \]

    The second moment of the estimated density is

        \begin{align*}  E[X^2] &= \int_{x} x^2 \hat{f}(x) ~dx  = \frac{1}{n} \sum_{n} \int_{x} \frac{x^2}{\lambda} K\left(\frac{X_i - x}{\lambda} \right)~dx \\  &= \frac{1}{n} \sum_n \int_{u} (X_i + u\lambda)^2 K(u)~du \\  &= \frac{1}{n} \sum_{n} X_i^2 + \frac{2}{n} \sum_{n} \lambda X_i \int_{u} u K(u)~du + \frac{1}{n} \sum_{n} \lambda^2 \int_{u} u^2 K(u) ~du \\  &= \frac{1}{n} \sum_{n} X_i^2 + \frac{2}{n} \sum_{n}\lambda  X_i \kappa_1(K) + \lambda^2 \kappa_2(K) \\  &= \frac{1}{n} \sum_{n} X_i^2 + \lambda^2 \kappa_2(K)  \end{align*}

    Thus,

        \begin{align*}  Var[X] &= \frac{1}{n} \sum_{n} X_i^2 + \lambda^2 \kappa_2(K) - \left( \frac{1}{n} \sum_{n} X_i \right)^2 \\  &= \left\lbrace \frac{1}{n} \sum_{n} X_i^2 - \left( \frac{1}{n} \sum_{n} X_i \right)^2 \right\rbrace + \lambda^2 \kappa_2(K) \\  &= \hat{\sigma}^2 + \lambda^2 \kappa_2(K)  \end{align*}
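These properties are easy to verify numerically. Below is a quick check for the Gaussian kernel, for which \kappa_2(K) = 1 (my own sketch, integrating the estimate on a fine grid).

# Numerical check of properties 2-4 for a Gaussian kernel (kappa_2 = 1).
# A sketch only: the estimated density is integrated on a fine grid.
set.seed(1)
X      <- rnorm(500, mean = 2, sd = 3)
lambda <- 0.5
grid   <- seq(min(X) - 5, max(X) + 5, length.out = 5000)
dx     <- grid[2] - grid[1]
fhat   <- sapply(grid, function(x0) mean(dnorm((X - x0) / lambda)) / lambda)

sum(fhat * dx)                 # property 2: roughly 1
sum(grid * fhat * dx)          # property 3: roughly mean(X)
sum(grid^2 * fhat * dx) - sum(grid * fhat * dx)^2
# property 4: roughly mean((X - mean(X))^2) + lambda^2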

Summary

  • The empirical distribution function (EDF) assigns a mass of 1/N to each x_i, resulting in a discrete or “jumpy” estimate.
  • Kernel density estimators (KDE) estimate f(x) by constructing a neighbourhood around the point of interest x. Observations within this neighbourhood are then assigned a mass based on their distance from x via a kernel function, resulting in a smooth estimate.
  • Popular kernel choices are the Gaussian and Epanechnikov kernels. Both are proper, symmetric density functions and hence second-order kernels.
  • The size of the neighbourhood is dictated by the bandwidth \lambda. Special care must be taken when selecting \lambda in order to ensure that the bias-variance trade-off is balanced for your problem. Different methods such as CV are available to assist you with optimal bandwidth selection.
  • Choice of bandwidth has a larger impact on estimation quality than your choice of kernel.

I will be extending the kernel density estimator to kernel regression in future blog posts and conducting a case study in R that uses these methods. Stay tuned!

Advent of Code 2017 in R: Day 2

Day 2 of the Advent of Code provides us with a tab-delimited input consisting of numbers 2-4 digits long and asks us to calculate its “checksum”. The checksum is defined as the sum of the differences between each row’s largest and smallest values. Awesome! This is a problem that is well-suited for base R.

I started by reading the file in using read.delim, specifying header = F in order to ensure that numbers within the first row of the data are not treated as variable names.

When working with short problems like this where I know I won’t be rerunning my code or reloading my data often, I will use file.choose() in my read.whatever functions for speed. file.choose() opens Windows Explorer, allowing you to navigate to your file path.

input <- read.delim(file.choose(), header = F)
 
# Check the dimensions of input to ensure the data read in correctly.
dim(input)

After checking the dimensions of our input, everything looks good. As suspected, this is a perfect opportunity to avoid an explicit loop by using the apply function on each row.

row_diff <- apply(input, 1, function(x) max(x) - min(x))
checksum <- sum(row_diff)
checksum

Et voilà, the answer is 45,972!

As was the case with Day 1, we are then prompted with a part two. In order to help out a worrisome computer, we now have to find the two evenly divisible numbers within each row, divide them, and add each row’s result.

This is a tad bit trickier, but it’s clear we need to work with the modulo operator. We need to identify the two numbers a and b within each row such that a %% b == 0. If a < b, a %% b will just return a, so my first thought is that we should sort the rows in ascending order.

# Sort rows of matrix
input <- t(apply(input, 1, sort))

You can avoid transposing the matrix if you use some helpful packages, but per my previous post, I'm trying to stick to base R *sobs quietly*. I used loops to solve this because we need to iterate through each row, comparing each element to every other element. I did try using vectorization here via sapply,

# Compare all elements in first row of input matrix.
sapply(input[1,], function(x) x %% input[1,] == 0)

but this produces a 16 x 16 matrix for each row with a diagonal that needs to be ignored, on top of which we need to find the one TRUE element and map it back to the two matrix indices. I think I would pursue this method further if there was more than one pair of numbers we were searching for but since we areeeeen't....

# Initialize vector to store each row value
row_val <- c(rep(NA, nrow(input)))
 
# For each row..
for(row in 1:nrow(input)){
 
  # Compare each element to its succeeding elements..
  for(col in 1:(ncol(input) - 1)){
    for(i in 1:(ncol(input) - col)){
 
      # If the modulo is equal to 0, 
      # set the vector element equal to the division result.
      if(input[row, col + i] %% input[row, col] == 0){ 
 
        row_val[row] <- input[row, col + i] / input[row, col]
 
      }
    }
  }
}
 
sum(row_val)

Our sum is 326, the correct answer. I'd love to see some alternative solutions to part 2; I feel like there is definitely a lot of optimization that could occur here!
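For what it's worth, here is one vectorized sketch I would try next (a rough idea, only lightly checked, and it assumes each row contains exactly one divisible pair, as the puzzle promises):

# For each row, compare every pair with outer(), ignore the diagonal,
# and divide the larger element by the smaller one.
row_div <- apply(input, 1, function(x) {
  divisible <- outer(x, x, "%%") == 0   # TRUE when x[j] divides x[i]
  diag(divisible) <- FALSE              # every number divides itself; ignore
  idx <- which(divisible, arr.ind = TRUE)[1, ]
  x[idx["row"]] / x[idx["col"]]
})
sum(row_div)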

Advent of Code 2017 in R: Day 1

My boyfriend recently introduced me to Advent of Code while I was in one of my “learn ALL of the things!” phases. Every year starting December 1st, new programming challenges are posted daily leading up to Christmas. They’re meant to be quick 5-10 minute challenges, so, wanting to test my programming skills, I figured why not try to do all of them in base R!

I went with base R because I know I can dplyr and stringr my way to victory with some of these challenges. I want to force myself to really go back to basics and confirm that I have the knowledge to do these things on my own, without Hadley Wickham's (very much appreciated in any other situation) assistance.

Since I’ve started, I’ve also seen a couple of other bloggers attempt to do these challenges in R so I’m really curious how my solutions will compare to theirs.

The first day of the challenge provides you with a string of numbers and asks you to sum all of the digits that match the next digit in a circular list, i.e. the digit after the last digit is the first digit.

My string was…

8231753674683997878179259195565332579493378483264978184143341284379682788518559178822225126625428318115396632681141871952894291898364781898929292614792884883249356728741993224889167928232261325123447569829932951268292953928766755779761837993812528527484487298117739869189415599461746944992651752768158611996715467871381527675219481185217357632445748912726487669881876129192932995282777848496561259839781188719233951619188388532698519298142112853776942545211859134185231768952888462471642851588368445761489225786919778983848113833773768236969923939838755997989537648222217996381757542964844337285428654375499359997792679256881378967852376848812795761118139288152799921176874256377615952758268844139579622754965461884862647423491918913628848748756595463191585555385849335742224855473769411212376446591654846168189278959857681336724221434846946124915271196433144335482787432683848594487648477532498952572515118864475621828118274911298396748213136426357769991314661642612786847135485969889237193822718111269561741563479116832364485724716242176288642371849569664594194674763319687735723517614962575592111286177553435651952853878775431234327919595595658641534765455489561934548474291254387229751472883423413196845162752716925199866591883313638846474321161569892518574346226751366315311145777448781862222126923449311838564685882695889397531413937666673233451216968414288135984394249684886554812761191289485457945866524228415191549168557957633386991931186773843869999284468773866221976873998168818944399661463963658784821796272987155278195355579386768156718813624559264574836134419725187881514665834441359644955768658663278765363789664721736533517774292478192143934318399418188298753351815388561359528533778996296279366394386455544446922653976725113889842749182361253582433319351193862788433113852782596161148992233558144692913791714859516653421917841295749163469751479835492713392861519993791967927773114713888458982796514977717987598165486967786989991998142488631168697963816156374216224386193941566358543266646516247854435356941566492841213424915682394928959116411457967897614457497279472661229548612777155998358618945222326558176486944695689777438164612198225816646583996426313832539918

My first thought was that I would need to separate this string such that each character was the element of an object, either a vector or a list. I kept things simple and started by just copy-pasting the string into R. I could import it as a .txt file or otherwise but I figured that was unnecessary for such a quick problem. I stored the string as a variable named input.

 
# Split string after each character.
 
input_split <- strsplit(input, "")
 
# As a result, input_split is a list with 1 element:
# a vector containing each character of input as an 
# element. Annoying. Let's unlist() it to extract
# *just* the vector.
 
char_vector <- unlist(input_split)
 
# The problem now is that if we are going to sum
# the elements of our string, we need them to be
# numeric and not characters. Easy enough...
 
num_vector <- as.numeric(char_vector)
 
# Now lets just initialize our sum...
 
num_sum = 0
 
# And use a loop...
 
for(i in 1:length(num_vector)){
 
  # If we have the last element of the input string, 
  # set the next number equal to the first element
  # of the string, else select element i + 1.
  next_num <- ifelse(i == length(num_vector), 
  				num_vector[1],
  				num_vector[i + 1])
 
 
  # If our current element is equal to the next element,
  # update the sum.
  if(num_vector[i] == next_num){
 
  	num_sum = num_sum + num_vector[i]
 
  }
}
 
num_sum

Our sum is 1,390, which is correct. Huzzah! Once we complete part 1, we are surprised with part 2, which asks us to now consider the digit “halfway around the circular list”. So, if our list is 10 digits long, we would add element 1 and element 6, element 8 and element 3, etc. We can make a really simple modification to our established loop to solve this…

 
num_sum = 0
skip = length(num_vector)/2
 
for(i in 1:length(num_vector)){
 
  # For the first half of the list we need to
  # add half the length of the vector to the
  # index to get the next number & subtract
  # for the second half.
 
  next_num <- ifelse(i <= skip, 
                     num_vector[i + skip],
                     num_vector[i - skip])
 
 
  if(num_vector[i] == next_num){
 
    num_sum = num_sum + num_vector[i]
 
  }
}

Our answer is 1,232, which is correct. Woo.

I know that loops in R are often discouraged due to their speed, or lack thereof. I think I might come back and play with this one again to see how I can implement vectorization here. That being said, I also think it’s important to know which tool is right for the job. With a problem of this size, I think a loop is just fine.
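In case I do come back to it, here is the vectorized version I have in mind (a sketch, not benchmarked): build the "next digit" vector by circularly shifting num_vector and sum the positions where the two agree.

# Part 1: compare each digit to the digit one position ahead (wrapping around)
next_vec <- c(num_vector[-1], num_vector[1])              # circular shift by one
sum(num_vector[num_vector == next_vec])

# Part 2: compare each digit to the digit halfway around the list
skip     <- length(num_vector) / 2
half_vec <- c(num_vector[-(1:skip)], num_vector[1:skip])  # circular shift by half
sum(num_vector[num_vector == half_vec])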

Hello world!

Hello world! 

I started blogging back when I was a Master of Applied Statistics student making my way through some very heavy journal articles. It always took me a while to work my way through journal articles as I often found myself wanting to semi-prove all the results in a paper for myself. Having “wasted” an obscene amount of paper and time working these things out, I decided that I would attempt to translate these scribbles into complete notes… and so, Statisticelle was born!

Another big part of my learning process is applying what I’m reading or deriving to an actual data set. Having based my Master’s degree on clinical trial research and now working in industry, I don’t think that I can actually use/discuss the data relevant to my past research or career without breaking a few laws. However, I think this blog presents itself as a great opportunity to delve into the massive collections of open data now available. I’m going to make an effort to provide references to these data sets and any R code I used – even if no one uses it, I’m hoping that putting it out in the open will encourage me to be a more diligent statistician!

It’s been a while since I’ve written any blog posts but I’ve recently found myself staring at my statistics bookshelf, thinking that I really miss delving into new topics and working my way through all the messy bits. I’m not sure that anyone will really read this but on the off-chance that they do, I hope these blog posts are helpful and not a collection of misinformation! I am trying to understand these topics in a way that makes sense to me and I can only hope that I am doing so correctly. If you do find any errors, do not hesitate to let me know.

Anyways, that’s my mission statement in so many words. Part 2 – let’s see how this turns out!