
Calculating Confidence Intervals in R

A favorite paper of mine is the 1986 paper by Gardner and Altman, which argues for confidence intervals and estimation as a more useful way of reporting data than a dichotomous p-value:

Gardner, MJ. Altman, DG. (1986). Confidence intervals rather than P values: Estimation rather than hypothesis testing. Brit Med J; 292:746-750.

In this paper, Gardner and Altman make three main points for either moving away from p-values or supplementing them in statistical reporting:

  1. Research often focuses on null hypothesis significance testing with the goal being to identify statistically significant results.
  2. However, we are often interested in the magnitude of the factor of interest.
  3. Given that research deals with samples of a broader population, readers are interested not only in the observed magnitude of the estimand but also in the degree of variability and the plausible range of values for the population. This variability can be quantified using confidence intervals.

Aside from providing a clear explanation of the issue at hand, the paper’s appendix offers the equations to calculate confidence intervals for means, differences in means, proportions, and differences in proportions. Thus, I decided to compile the appendix into an R script for those looking to code confidence intervals (and not have to rely on pre-built functions).

All of this code is available on my GITHUB page.

Confidence Intervals for Means and Differences

Single Sample

  1. Obtain the mean
  2. Calculate the standard error (SE) of the mean as SE = SD/sqrt(N)
  3. Multiply the SE by a t-critical value specific to the level of confidence of interest and the degrees of freedom (DF) for the single sample, DF = N – 1

The confidence intervals are calculated as:

Low = mean – t_{crit} * SE
High = mean + t_{crit} * SE

Example

We collect data on 30 participants on a special test of strength and observe a mean of 40 and standard deviation of 10. We want to calculate the 90% confidence interval.

The steps above can easily be computed in R. First, we write down the known info (the data that is provided to us). We then calculate the standard error and degrees of freedom (N – 1). To obtain the critical value for our desired level of confidence, we use the t-distribution, which is specific to the degrees of freedom of our data.

## Known info
N <- 30
avg <- 40
SD <- 10

## Calculate DF and SE
SE <- SD / sqrt(N)
DF <- N - 1

## Calculate the confidence level of interest (90%), the amount of data in each of the two tails ((1 - level of interest) / 2) and the t-critical value
level_of_interest <- 0.90
tail_value <- (1 - level_of_interest)/2

t_crit <- abs(qt(tail_value, df = DF))
t_crit

## Calculate the 90% CI
low90 <- round(avg - t_crit * SE, 1)
high90 <- round(avg + t_crit * SE, 1)

cat("The 90% Confidence Interval is:", low90, " to ", high90)

Two Samples

  1. Obtain the sample means and standard deviations for the two samples
  2. Pool the estimate of the standard deviation: s = sqrt(((n_1 – 1)*s_1^2 + (n_2 – 1)*s_2^2) / (n_1 + n_2 – 2))
  3. Calculate the SE for the difference: SE_{diff} = s * sqrt(1/n_1 + 1/n_2)
  4. Calculate the confidence interval as:

Low = (x_1 – x_2) – t_{crit} * SE_{diff}
High = (x_1 – x_2) + t_{crit} * SE_{diff}

Example

The example in the paper provides the following info:

  • Blood pressure levels were measured in 100 diabetic and 100 non-diabetic men aged 40-49 years old.
  • Mean systolic blood pressure was 146.4 mmHg (SD = 18.5) in diabetics and 140.4 mmHg (SD = 16.8) in non-diabetics.

Calculate the 95% CI.

## store the known data
N_diabetic <- 100
N_non_diabetic <- 100
diabetic_avg <- 146.4
diabetic_sd <- 18.5
non_diabetic_avg <- 140.4
non_diabetic_sd <- 16.8

## calculate the difference in means, the pooled SD, and the SE of diff
group_diff <- diabetic_avg - non_diabetic_avg

pooled_sd <- sqrt(((N_diabetic - 1)*diabetic_sd^2 + (N_non_diabetic - 1)*non_diabetic_sd^2) / (N_diabetic + N_non_diabetic - 2))

se_diff <- pooled_sd * sqrt(1/N_diabetic + 1/N_non_diabetic)

## Calculate the confidence level of interest (95%), the amount of data in each of the two tails ((1 - level of interest) / 2) and the t-critical value
level_of_interest <- 0.95
tail_value <- (1 - level_of_interest)/2

t_crit <- abs(qt(tail_value, df = N_diabetic + N_non_diabetic - 2))
t_crit

## Calculate the 95% CI
low95 <- round(group_diff - t_crit * se_diff, 1)
high95 <- round(group_diff + t_crit * se_diff, 1)

cat("The 95% Confidence Interval is:", low95, " to ", high95)

Confidence Intervals for Proportions

Single Sample

  1. Obtain the observed proportion from the sample
  2. Calculate the SE of the proportion, SE = sqrt((p * (1-p)) / N)
  3. Obtain the z-critical value from a standard normal distribution for the level of confidence of interest (since, unlike for means, the critical value for a proportion does not depend on sample size).
  4. Calculate the confidence interval:

low = p – z_{crit} * SE
high = p + z_{crit} * SE

Example

We observe a basketball player with 80 field goal attempts and a FG% of 39%. Calculate the 95% CI.

## Store the known info
N <- 80
fg_pct <- 0.39

## Calculate SE
se <- sqrt((fg_pct * (1 - fg_pct)) / N)

## Calculate z-critical value for 95% confidence
level_of_interest <- 0.95
tail_value <- (1 - level_of_interest) / 2
z_crit <- qnorm(p = tail_value, lower.tail = FALSE)

## Calculate the 95% CI
low95 <- round(fg_pct - z_crit * se, 3)
high95 <- round(fg_pct + z_crit * se, 3)

cat("The 95% Confidence Interval is:", low95, " to ", high95)

Two Samples

  1. Calculate the difference in proportions between the two groups
  2. Calculate the SE of the difference in proportions: SE_{diff} = sqrt(((p_1 * (1 – p_1)) / n_1) + ((p_2 * (1 – p_2)) / n_2))
  3. Calculate the z-critical value for the level of interest
  4. Calculate the confidence interval as:

low = (p_1 – p_2) – (z_{crit} * se_{diff})
high = (p_1 – p_2) + (z_{crit} * se_{diff})

Example of two unpaired samples

The paper provides the following table of example data:

data.frame(
  response = c("improvement", "no improvement", "total"),
  treatment_A = c(61, 19, 80),
  treatment_B = c(45, 35, 80)
)

The difference we are interested in is between the proportion who improved in treatment A and the proportion of those who improved in treatment B.

## Obtain the two proportions from the table and total sample sizes
pr_A <- 61/80
n_A <- 80
pr_B <- 45/80
n_B <- 80

## calculate the difference in proportions
diff_pr <- pr_A - pr_B

## Calculate the SE
se_diff <- sqrt((pr_A * (1 - pr_A))/n_A + (pr_B * (1 - pr_B))/n_B)

## Get z-critical value for 95% confidence
level_of_interest <- 0.95
tail_value <- (1 - level_of_interest) / 2
z_crit <- qnorm(p = tail_value, lower.tail = FALSE)

## Calculate the 95% CI
low95 <- round(diff_pr - z_crit * se_diff, 3)
high95 <- round(diff_pr + z_crit * se_diff, 3)

cat("The 95% Confidence Interval is:", low95, " to ", high95)


Example for two paired samples

We can organize the data in a table like this:

data.frame(
  test_1 = c("Present", "Present", "Absent", "Absent"),
  test_2 = c("Present", "Absent", "Present", "Absent"),
  number_of_subjects = c("a", "b", "c", "d")
)

Let’s say we measured a group of subjects for a specific disease twice in a study. A subject either has the disease (present) or does not (absent) in the two time points. We observe the following data:

dat <- data.frame(
  test_1 = c("Present", "Present", "Absent", "Absent"),
  test_2 = c("Present", "Absent", "Present", "Absent"),
  number_of_subjects = c(10, 25, 45, 5)
)

dat

## total sample size
N <- sum(dat$number_of_subjects)

If we want to compare the proportion with the disease present at Test 1 to the proportion present at Test 2, we calculate the two proportions and their difference as:

p_1 = (a + b) / N
p_2 = (a + c) / N
Diff = p_1 – p_2

The SE of the difference is:

SE_{diff} = 1/N * sqrt(b + c – (b-c)^2/N)

## Obtain the info we need from the table
p1 <- (10 + 25) / N
p2 <- (10 + 45) / N
diff_prop <- p1 - p2
diff_prop

## Calculate the SE of the difference
se_diff <- 1 / N * sqrt(25 + 45 - (25 - 45)^2 / N)

## Get z-critical value for 95% confidence
level_of_interest <- 0.95
tail_value <- (1 - level_of_interest) / 2
z_crit <- qnorm(p = tail_value, lower.tail = FALSE)

## Calculate the 95% CI
low95 <- round(diff_prop - z_crit * se_diff, 3)
high95 <- round(diff_prop + z_crit * se_diff, 3)

cat("The 95% Confidence Interval is:", low95, " to ", high95)

As always, all of this code is available on my GITHUB page.

And, if you’d like to read the full paper, you can find it here:

Gardner, MJ. Altman, DG. (1986). Confidence intervals rather than P values: Estimation rather than hypothesis testing. Brit Med J; 292:746-750.

Validity, Reliability, & Responsiveness — A few papers on measurement in sport science

I had the pleasure of speaking at the National Strength and Conditioning Association’s (NSCA) National Conference this summer, and while there I made it a point to attend the Sport Science & Performance Technology Special Interest Group meeting as well.

One thing that immediately stood out to me was the number of questions raised specific to what types of technologies to purchase (e.g. “Which brand of force plates should we buy?”, “Does anyone have a list comparing and contrasting different technologies so that we can determine what would be best for us?”, etc.).

While these are fine questions, I do feel they are a bit like putting the cart before the horse. Before thinking about what technology to purchase, we should spend a good bit of time gaining clarity on the question(s) we are attempting to answer. Once we have a firm understanding of the question we can then begin the process of evaluating whether a technology exists that can help us collect the necessary data to explore that question. In fact, this was the main crux of my lecture at the conference, as I spoke about using the PPDAC Framework in practice (I wrote an article about this framework a couple of years ago).

A force plate, a GPS unit, or an accelerometer won’t solve all of our problems. In fact, depending on our question, they might not solve any of our problems! Moreover, as sport scientists we need to concern ourselves not only with the research question but, also whether the desired technology is useful within our ecological setting. Just because something worked in a controlled lab environment or was valid in a different sport does not mean it will be useful for our sport, or in our setting, or with our athletes, or given our unique constraints.

So, I decided to share a few resources pertaining to measurement theory concepts such as validity, reliability, and responsiveness/sensitivity for those working in the sport science space who are interested in more critical approaches to evaluating the technology we use in practice.

Additionally, for those interested, several years ago I wrote a full R code blog for the last paper above (Swinton et al), which can be accessed HERE.

Happy reading!

The High Performance Hockey Podcast Interview

This week, I had the great pleasure of being interviewed by my good friend and colleague Anthony Donskov for his High Performance Hockey Podcast.

Anthony has done a tremendous job for the sports science and strength and conditioning community in his teaching, writing, and podcasting. He brings a wealth of knowledge from both the applied strength coach realm all the way through to his PhD work.

In this podcast interview, Anthony and I discuss:

  • Data analysis
  • The PPDAC Framework for conducting research
  • My criticisms of applied sport science
  • The challenge of measuring hard things and things that matter in applied sport.

Check out the podcast HERE.

R Tips & Tricks: Normalizing test dates & calculating test differences

A friend of mine was downloading some force plate data from the software provider so that he could evaluate test data in a few of his athletes during return to play. The issue he was running into was that the different athletes all had different numbers of tests and different start and end testing times. The software exports the test outputs by date and he was wondering how he could normalize the dates to numeric values (e.g. Test 1, Test 2, etc.) so that he could model the date (since we can’t really use a Date in a regression model).

I’ll be the first to admit that working with dates and times can be an incredible pain in the butt. For reference, I covered the topic of converting Catapult GPS practice duration strings to actual training minutes HERE. To help him out, I provided a few different solutions depending on the research question. I also added some code for calculating changes in test performance between tests and from each test to baseline.

The full code is available on my GITHUB page.

Loading Packages & Simulating Data

## load packages ----------------------------------------------
library(tidyverse)
library(lubridate)

## Simulate data ----------------------------------------------
set.seed(78)
dat <- tibble(
  
  athlete = rep(c("Tom", "Bob", "Franklin"), times = c(5, 10, 3)),
  test_dates = c(
    seq(as.Date("2023-01-01"), as.Date("2023-01-5"), by = "days"),
    seq(as.Date("2023-02-15"), as.Date("2023-02-24"), by = "days"),
    as.Date(c("2023-01-19", "2023-01-30", "2023-02-26"))
  ),
  jump_height = round(rnorm(n = 18, mean = 28, sd = 2.5), 1)
  
)

dat


We can see that Tom has 5 tests, Bob has 10, and Franklin has only 3. Additionally, Tom and Bob tested every day, consecutively, while Franklin was less compliant and has larger time frames between his tests.

Create a test number

First, let’s normalize the Dates so that they are numeric. Basically, instead of dates we want a value indicating whether the test was test 1, or test 5, or test N. We will do this by creating a row_number() id/counter for each individual athlete.

### Create a test number ------------------------------------------
dat <- dat %>%
  group_by(athlete) %>%
  mutate(test_day = row_number())

dat

Calculating the time between tests

Alternatively, we may want to know not just the test number but also the number of days between tests.

The code to do this is a bit ugly looking so let’s unpack it.

  1. Since we are dealing with dates we use the difftime() function which takes an argument for the two times you are looking to calculate the difference between. Here, we are trying to calculate the difference in time (days) between one date and the date preceding it for each individual athlete.
  2. The difftime() function will produce a difftime variable. If we want to make this numeric we first need to convert it to a character, so we do that with the as.character() function.
  3. Once the variable is a character we use the as.numeric() function to convert it to a numeric value.
  4. Finally, since the first value for each athlete will be an NA, since there is no date preceding the first test, we use the coalesce() function to fill in a 0 value for each of the NA’s, to indicate that this was the first test and thus there was no time between it and any other test.
### Calculate the time between tests -------------------------------
dat <- dat %>%
  group_by(athlete) %>%
  mutate(time_btw_tests = coalesce(as.numeric(as.character(difftime(test_dates, lag(test_dates)))), 0))

dat
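
As a side note, difftime() also has a units argument, which lets us ask for days directly and skip the as.character() step:

### Alternative: ask difftime() for days directly via its units argument
dat %>%
  group_by(athlete) %>%
  mutate(time_btw_tests = coalesce(as.numeric(difftime(test_dates, lag(test_dates), units = "days")), 0))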

Notice that Tom and Bob have 1 day between all of their tests while Franklin’s second test was 11 days after his first and his third test was 27 days after his second.

Calculate the difference in jump height from one test to the next

### Calculate difference in jump height from one day to the next -------------------
dat <- dat %>%
  group_by(athlete) %>%
  mutate(test_to_test_diff = jump_height - lag(jump_height))

dat

Here, we use the lag() function to calculate the difference between one value and the value before it within the same column. Since we grouped by athlete, which is what we want, each athlete’s first test will always have an NA in this new column, since there was no test preceding it.

Calculating the difference in jump height from the baseline test

Finally, we might also be interested in evaluating the performance on each test relative to the athlete’s baseline test. To do this we simply subtract the jump_height indexed in row one (the baseline) from jump_height for each athlete.

### Calculate difference in jump height from each test to the baseline test -------------

dat <- dat %>%
  group_by(athlete) %>%
  mutate(test_to_baseline_diff = jump_height - jump_height[1])

dat
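
If you prefer, dplyr’s first() function does the same thing as indexing row one and reads a bit more explicitly:

### Equivalent approach using first() instead of jump_height[1]
dat %>%
  group_by(athlete) %>%
  mutate(test_to_baseline_diff = jump_height - first(jump_height))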

Wrapping Up

Dates and times are always tricky to deal with. Most sports technology providers will provide data as dates (or unix timestamps), meaning that we have to do some cleaning of the data to codify the dates as numeric values representing the test number or the days between tests (depending on the research question). Additionally, lag functions can be helpful for calculating the difference from one test to the next or from each test to the baseline.
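
For example, if a provider exports unix timestamps (seconds since 1970-01-01 UTC), a quick conversion gets us back to dates before applying any of the steps above (the timestamp below is made up for illustration):

## Convert a hypothetical unix timestamp to a date
unix_ts <- 1672531200   # 2023-01-01 00:00:00 UTC
as.Date(as.POSIXct(unix_ts, origin = "1970-01-01", tz = "UTC"))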

The entire code is available on my GITHUB page.

If you have any data cleaning issues that you are dealing with from various sports science technologies, feel free to reach out!

Displaying Tables & Plots Together

A common question that I get asked is for a simple way of displaying tables and plots together in the same one-page report. Most in the sport science space who are new to R will copy and paste their plot and table outputs into a word document and then share that with their colleagues. But this creates extra work: copying, pasting, and making sure you don’t mess up and forget to paste the latest plot. So, today’s blog article will walk through a really easy way to combine tables and plots into a single-page report, which you can save to PDF or jpeg directly from RStudio. This same approach is also useful for researchers looking to combine tables and plots into a single figure for publication. I’ll show how to do this using both ggarrange() and {patchwork}.

As always, the full code is available on my GITHUB page.

Load Libraries and Set a Plotting Theme

### Load libraries
library(tidyverse)
library(ggpubr)
library(gridExtra)
library(patchwork)
library(broom)
library(palmerpenguins)

## set plot theme
theme_set(theme_classic() +
            theme(axis.text = element_text(size = 11, face = "bold"),
                  axis.title = element_text(size = 13, face = "bold"),
                  plot.title = element_text(size = 15),
                  legend.position = "top"))


Load Data

We will use the {palmerpenguins} data that is freely available in R.

## load data
data("penguins")
d <- penguins %>%
  na.omit()


Build the Plots & Table

First we will build our plots. We are going to create two plots and one table. The table will store the information from a linear regression which regresses bill length on flipper length and penguin sex. The plots will be our visualization of these relationships.

## Create Plots
plt1 <- d %>%
  ggplot(aes(x = flipper_length_mm, y = bill_length_mm)) +
  geom_point(aes(fill = sex),
             size = 4,
             shape = 21,
             color = "black",
             alpha = 0.5) +
  geom_smooth(method = "lm",
              aes(color = sex)) +
  scale_fill_manual(values = c("female" = "green", "male" = "blue")) +
  scale_color_manual(values = c("female" = "green", "male" = "blue")) +
  labs(x = "Flipper Length (mm)",
       y = "Bill Length (mm)",
       title = "Bill Length ~ Flipper Length")


plt2 <- d %>%
  ggplot(aes(x = sex, y = bill_length_mm)) +
  geom_violin(alpha = 0.5,
              aes(fill = sex)) +
  geom_boxplot(width = 0.2) +
  geom_jitter(alpha = 0.5) +
  labs(x = "Sex",
       y = "Bill Length (mm)",
       title = "Bill Length Conditional on Penguin Gender")


## Create table
fit <- d %>%
  lm(bill_length_mm ~ flipper_length_mm + sex, data = .) %>%
  tidy() %>%
  mutate(across(.cols = estimate:statistic,
                ~round(.x, 3)),
         term = case_when(term == "(Intercept)" ~ "Intercept",
                          term == "flipper_length_mm" ~ "Flipper Length (mm)",
                          term == "sexmale" ~ "Male"))

Convert the table into a ggtexttable format

Right now the table is in a tibble/data frame format. To get it into a format that is usable within the display grid, we will convert it to a ggtexttable and use some styling to make it look pretty.

## Build table into a nice ggtextable() to visualize it
tbl <- ggtexttable(fit, rows = NULL, theme = ttheme("blank")) %>%
  tab_add_hline(at.row = 1:2, row.side = "top", linewidth = 2) %>%
  tab_add_hline(at.row = 4, row.side = "bottom", linewidth = 3, linetype = 1)

Display the Table and Plots using ggarrange

We simply add our plot and table elements to the ggarrange() function and get a nice looking report.

## Plots & Table together using ggarrange()
ggarrange(plt1, plt2, tbl,
          ncol = 2, nrow = 2)
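
If you want to save the report straight to PDF, the arranged object can be passed to ggsave() (the file name and dimensions here are just placeholders):

## Save the one-page report to PDF (file name and size are placeholders)
report <- ggarrange(plt1, plt2, tbl,
                    ncol = 2, nrow = 2)

ggsave(filename = "penguin_report.pdf", plot = report, width = 12, height = 8)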

Display the Table and Plots using patchwork

We can accomplish the same goal using the {patchwork} package. The only trick here is that we can’t pass a ggtexttable element into patchwork. We need to convert the table into a tableGrob() to make this work. A tableGrob() is simply a way for us to capture all of the information that is required for the table structure we’d like. Also note that we can pass the same tableGrob() into ggarrange() above and it will work.

## Plots & Table together using patchwork
# Need to build the table as a tableGrob() instead of a ggtexttable
# to make it work with patchwork
tbl2 <- tableGrob(fit, rows = NULL, theme = ttheme("blank")) %>%
  tab_add_hline(at.row = 1:2, row.side = "top", linewidth = 2) %>%
  tab_add_hline(at.row = 4, row.side = "bottom", linewidth = 3, linetype = 1)

Now we wrap the tableGrob and our plots into the wrap_plots() function and we are all set!

# now visualize together
wrap_plots(plt1, plt2, tbl2, 
           ncol = 2,
           nrow = 2)
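
Because the result of wrap_plots() is itself a patchwork object, we can also tack a title onto the full report with plot_annotation() (the title text is my own):

# Optionally add a title to the full report
wrap_plots(plt1, plt2, tbl2,
           ncol = 2,
           nrow = 2) +
  plot_annotation(title = "Penguin Bill Length Report")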

Wrapping Up

Instead of copying and pasting tables and plots into word, try using these two simple approaches to create a single report page that stores all of the necessary information your colleagues need to see!

All of the code is available on my GITHUB page.