Category Archives: R Tips & Tricks

Neural Networks….It’s regression all the way down!

Yesterday, I talked about how the t-test and ANOVA are fundamentally just linear regression. But what about something more complex? What about something like a neural network?

Whenever people bring up neural networks I always say, “The most basic neural network is a sigmoid function. It’s just logistic regression!!” Of course, neural networks can get very complex and there is a lot more that can be added to them to maximize their ability to do the job. But fundamentally, they look like regression models, and when you add several hidden layers (deep learning) you end up just stacking a bunch of regression models on top of each other (I know I’m oversimplifying this a little bit).

Let’s see if we can build a simple neural network to prove it. As always, you can access the full code on my GITHUB page.

Loading packages, functions, and data

We will load {tidyverse} for data cleaning, {neuralnet} for building our neural network, and {mlbench} to access the Boston housing data.

I create a z-score function that will be used to standardize the features for our model. We will keep this simple and attempt to predict Boston housing prices (medv) using three features (rm, dis, indus). To read more about what these features are, type ?BostonHousing in your R console. Once we’ve selected those features out of the data set, we apply our z-score function to them.

## Load packages
library(tidyverse)
library(neuralnet)
library(mlbench)

## z-score function
z_score <- function(x){
  z <- (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE)
  return(z)
}

## get data
data("BostonHousing")

## z-score features
d <- BostonHousing %>%
  select(medv, rm, dis, indus) %>%
  mutate(across(.cols = rm:indus,
                ~z_score(.x),
                .names = "{.col}_z"))
  
d %>% 
  head()

Train/Test Split

There isn’t much of a train/test split here because I’m not building a full model to be tested. I’m really just trying to show how a neural network works. Thus, I’ll select the first row of data as my “test” set and retain the rest of the data for training the model.

## remove first observation for making a prediction on after training
train <- d[-1, ]
test <- d[1, ]

Neural Network Model

We build a simple model with 1 hidden layer and then plot the output. In the plot we see various numbers. The numbers in black refer to our weights and the numbers in blue refer to the biases.

## Simple neural network with one hidden layer
set.seed(9164)
fit_nn <- neuralnet(medv ~ rm_z + dis_z + indus_z,
                    data = train,
                    hidden = 1,
                    err.fct = "sse",
                    linear.output = TRUE)

## plot the neural network
plot(fit_nn)

Making Predictions — It’s linear regression all the way down!

As stated above, we have weights (black numbers) and biases (blue numbers). If we frame the neural network as a bunch of linear regressions stacked together, then we can think of the weights as functioning like regression coefficients and the biases as functioning like the linear model intercepts.

Let’s take each variable from the plot and store them in their own elements so that we can apply them directly to our test observation and write out the equation by hand.

## Predictions are formed using the weights (black) and biases
# Store the weights and biases from the plot and put them into their own elements
rm_weight <- 1.09872
dis_weight <- -0.05993
indus_weight <- -0.49887

# There is also a weight connecting the hidden layer to the output
hidden_weight <- 35.95032

bias_1 <- -1.68717
bias_2 <- 14.85824

With everything stored, we are ready to make a prediction on the test observation.

We begin at the input layer by multiplying each z-scored value by the corresponding weight from the plot above. We sum those together and add in the first bias — just like we would with a linear regression.

# Start by applying the weights to their z-scored values, sum them together and add
# in the first bias
input <- bias_1 + test$rm_z * rm_weight + test$dis_z * dis_weight + test$indus_z * indus_weight
input

One regression down, one more to go! But before we can move to the next regression, we need to transform this input value. The neural network is using a sigmoid function to make this transformation as the input value moves through the hidden layer. So, we should apply the sigmoid function before moving on.

# transform input -- the neural network is using a sigmoid function
input_sig <- 1/(1+exp(-input))
input_sig


We take this transformed input and multiply it by the hidden weight and then add it to the second bias. This final regression equation produces the predicted value of the home.

prediction <- bias_2 + input_sig * hidden_weight
prediction


The prediction here is in thousands of dollars, since medv is the median home value (in $1000s) from the 1970s census data.

Let’s compare the prediction we got by hand to what we get when we run the predict() function.

## Compare the output to what we would get if we used the predict() function
predict(fit_nn, newdata = test)

Same result!

Again, if you’d like the full code, you can access it on my GITHUB page.

Wrapping Up

Yesterday we talked about always thinking regression whenever you see a t-test or ANOVA. Today, we learn that we can think regression whenever we see a neural network, as well! By stacking two regression-like equations together we produced a neural network prediction. Imagine if we stacked 20 hidden layers together!
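
If you want to see the stacking in action, {neuralnet} accepts a vector for the hidden argument, one element per hidden layer. Below is a minimal sketch (layer sizes chosen arbitrarily for illustration, not tuned for this data):

## Sketch: two hidden layers (3 nodes, then 2 nodes) -- every arrow in the
## resulting plot is a weight (coefficient) and every blue node adds a bias (intercept)
set.seed(9164)
fit_nn_deep <- neuralnet(medv ~ rm_z + dis_z + indus_z,
                         data = train,
                         hidden = c(3, 2),
                         err.fct = "sse",
                         linear.output = TRUE)

## Note: deeper networks may need a larger stepmax to converge
plot(fit_nn_deep)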

The big take home, though, is that regression is super powerful. Fundamentally, it is the workhorse that helps to drive a number of other machine learning approaches. Imagine if you spent a few years really studying regression models? Imagine what you could learn about data analysis? If you’re up for it, one of my all time favorite books on the topic is Gelman, Hill, and Vehtari’s Regression and Other Stories. I can’t recommend it enough!

Confidence Intervals vs Prediction Intervals – A Frequentist & Bayesian Example

We often construct models for the purpose of estimating some future value or outcome. While point estimates for forecasting future outcomes are interesting, it is important to remember that the future contains a lot of uncertainty. As such, a reflection of this uncertainty is often conveyed using confidence intervals (which make a statement about the uncertainty of a population mean) or prediction intervals (which make a statement about a new/future observation for an individual within the population).

This tutorial will walk through how to calculate confidence intervals and prediction intervals by hand and then show the corresponding R functions for obtaining these values. We will finish by building the same model using a Bayesian framework and calculate the highest posterior density intervals and prediction intervals.

All of the code is accessible through my GITHUB page.

Loading Packages & Data

We will use the Lahman baseball data set. For the purposes of this example, we will constrain our data to all MLB seasons after 2009 and players who had at least 250 At Bats in those seasons.

library(tidyverse)
library(broom)
library(Lahman)

theme_set(theme_minimal())

data(Batting)

df <- Batting %>%
  filter(yearID >= 2010,
         AB >= 250)

 

EDA

Let’s explore the relationship between Hits (H) and Runs Batted In (RBI) in our data set.

 

cor.test(df$H, df$RBI)

We see a strong positive correlation, suggesting that as a player gets more hits they tend to also have more RBI’s.
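
To see this relationship visually, a quick scatter plot can be made with ggplot2 (a sketch of the figure, which isn’t reproduced here):

## scatter plot of Hits vs RBI with a simple linear trend line
df %>%
  ggplot(aes(x = H, y = RBI)) +
  geom_point(alpha = 0.5) +
  geom_smooth(method = "lm", se = FALSE, color = "red") +
  labs(title = "RBI ~ H",
       x = "Hits",
       y = "RBI")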

A Simple Regression Model

We will build a simple model that regresses RBI’s on Hits.

rbi_fit <- df %>%
  lm(RBI ~ H, data = .)

tidy(rbi_fit)
confint(rbi_fit)

  • Using tidy() from the broom package, we see that for every additional Hit we estimate a player to increase their RBI, on average, by 0.482.
  • We use the confint() function to produce the confidence intervals around the model coefficients.

 

Estimating Uncertainty

We can use the above model to forecast the expected number of RBI’s for a player given a number of hits.

For example, a player that has 100 hits would be estimated to produce approximately 49 RBI’s.

 

hits <- 100
pred_rbi <-  round(rbi_fit$coef[1] + rbi_fit$coef[2]*hits, 1)
pred_rbi

49 RBI’s is rather precise. There is always going to be some uncertainty around a number like this. For example, a player could be a good hitter but play on a team with poor hitters who don’t get on base thus limiting the possibility for all the hits he is getting to lead to RBI’s.

There are two types of questions we may choose to answer around our prediction:

  1. Predict the average RBI’s for players who get 100 hits.
  2. Predict the RBI’s for a single player who gets 100 hits.

The point estimate, which we calculated above, will be the same for these two questions. Where they will differ is in the uncertainty we have around that estimate. Question 1 is a statement about the population (the average RBIs for all players who had 100 hits) while question 2 is a statement about an individual (the RBI’s for a single player). Consequently, the confidence interval that we calculate to estimate our uncertainty for question 1 will be smaller than the prediction interval that we calculate to estimate our uncertainty for question 2.

Necessary Information for Computing Confidence Intervals & Prediction Intervals

In order to calculate the confidence interval and prediction interval by hand we need the following pieces of data:

  1. The model degrees of freedom.
  2. A t-critical value corresponding to the level of uncertainty we are interested in (here we will use 95%).
  3. The average number of hits (our independent variable) observed in our data.
  4. The standard deviation of hits observed in our data.
  5. The residual standard error of our model.
  6. The total number of observations in our data set.

 

## Model degrees of freedom
model_df <- rbi_fit$df.residual

## t-critical value for a 95% level of certainty
level_of_certainty <- 0.95
alpha <- 1 - (1 - level_of_certainty) / 2
t_crit <- qt(p = alpha, df = model_df)
t_crit

## Average & Standard Deviation of Hits
avg_h <- mean(df$H)
sd_h <- sd(df$H)

## Residual Standard Error of the model and Total Number of Observations
rse <- sqrt(sum(rbi_fit$residuals^2) / model_df)
N <- nrow(df)

 

Calculating the Confidence Interval

We can calculate the 95% confidence level by hand using the following equation:

CL95 = t_crit * rse * sqrt(1/N + ((hits - avg_h)^2) / ((N - 1) * sd_h^2))

 

## Calculate the 95% Confidence Limits
cl_95 <- t_crit * rse * sqrt(1/N + ((hits - avg_h)^2) / ((N-1) * sd_h^2))
cl_95

## 95% Confidence Interval
low_cl_95 <- round(pred_rbi - cl_95, 1)
high_cl_95 <- round(pred_rbi + cl_95, 1)

paste("100 Hits =", pred_rbi, "±", low_cl_95, "to", high_cl_95, sep = " ")

If we don’t want to calculate the Confidence Interval by hand we can simply use the predict() function in R. We obtain the same result.

 

round(predict(rbi_fit, newdata = data.frame(H = hits), interval = "confidence", level = 0.95), 1)

Calculating the Prediction Interval

Notice below that the prediction interval uses the same equation as the confidence interval with the exception of adding 1 before 1/N. As such, the prediction interval will be wider as there is more uncertainty when attempting to make a prediction for a single individual within the population.

PI95 = t_crit * rse * sqrt(1 + 1/N + ((hits - avg_h)^2) / ((N - 1) * sd_h^2))

 

## Calculate the 95% Prediction Limit
pi_95 <- t_crit * rse * sqrt(1 + 1/N + ((hits - avg_h)^2) / ((N-1) * sd_h^2))
pi_95

## Calculate the 95% Prediction interval
low_pi_95 <- round(pred_rbi - pi_95, 1)
high_pi_95 <- round(pred_rbi + pi_95, 1)

paste("100 Hits =", pred_rbi, "±", low_pi_95, "to", high_pi_95, sep = " ")

Again, if we don’t want to do the math by hand, we can simply use the predict() function and change the interval argument from confidence to prediction.

round(predict(rbi_fit, newdata = data.frame(H = hits), interval = "prediction", level = 0.95), 1)

Visualizing the 95% Confidence Interval and Prediction Interval

Instead of deriving the point estimate, confidence interval, and prediction interval for a single observation of hits, let’s derive them across the range of hit values and then plot the intervals over our data to see what they look like.

We could simply use ggplot2 to draw a regression line with 95% confidence intervals over top of our data.
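
The plot code isn’t reproduced here, but a sketch of that approach using geom_smooth() (which shades the 95% confidence interval by default) might look like this:

## regression line with the 95% confidence interval shaded
df %>%
  ggplot(aes(x = H, y = RBI)) +
  geom_point(color = "lightgrey") +
  geom_smooth(method = "lm", se = TRUE, level = 0.95)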

Notice how narrow the 95% confidence interval is around the regression line. The light grey shaded region is barely visible because the interval is so tight.

While plotting the data like this is simple, it might help us understand what is going on better if we constructed our own data frame of predictions and intervals and then overlaid those onto the original data set.

We begin by creating a data frame that has a single column, H, representing the range of hits observed in our data set. We then can create predictions and our 95% Prediction and Confidence Intervals across the vector of hits data that we just created.
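
That code isn’t shown above, so here is a sketch of the workflow using the predict() intervals we just covered (column names like cl_low and pi_low are my own, not from the original post):

## vector spanning the observed range of hits
hit_range <- data.frame(H = seq(from = min(df$H), to = max(df$H), by = 1))

## predictions with 95% confidence and prediction intervals
conf_int <- predict(rbi_fit, newdata = hit_range, interval = "confidence", level = 0.95) %>%
  as.data.frame() %>%
  rename(pred = fit, cl_low = lwr, cl_high = upr)

pred_int <- predict(rbi_fit, newdata = hit_range, interval = "prediction", level = 0.95) %>%
  as.data.frame() %>%
  select(pi_low = lwr, pi_high = upr)

intervals_df <- bind_cols(hit_range, conf_int, pred_int)

## overlay the intervals and fitted line on the original data
intervals_df %>%
  ggplot(aes(x = H)) +
  geom_point(data = df, aes(x = H, y = RBI), color = "lightgrey") +
  geom_ribbon(aes(ymin = pi_low, ymax = pi_high), fill = "lightblue", alpha = 0.5) +
  geom_ribbon(aes(ymin = cl_low, ymax = cl_high), fill = "green", alpha = 0.5) +
  geom_line(aes(y = pred), color = "black", size = 1.2)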

Similar to our first plot, we see that the 95% confidence interval (green) is very tight to the regression line (black). Also, the prediction interval (light blue) is substantially larger than the confidence interval across the entire range of values.

 

A Bayesian Perspective

We now take a Bayesian approach to constructing intervals. First, we need to specify our regression model. To do so I will be using the rethinking package which serves as a supplement to Richard McElreath’s brilliant textbook, Statistical Rethinking: A Bayesian Course with Examples in R and Stan.

To create our Bayesian model we will need to specify a few priors. Before specifying these, however, I am going to mean center the independent variable, Hits, and create a new variable called hits_c. Doing so helps with interpretation of the model output and it should be noted that the intercept will now reflect the expected value of RBIs when there is an average number of hits (which would be hits_c = 0).

For my priors:

  • Intercept = N(70, 25) — represents my prior belief of the number of RBI’s when a batter has an average number of hits.
  • Beta Coefficient = N(0, 20) — This prior represents the amount of change in RBI for a unit of change in a batter’s hits relative to the average number of hits for the population.
  • Sigma = Unif(0, 20) — this represents my prior for the model standard error.

 

library(rethinking)

## What is the average number of hits?
mean(df$H)

## Mean center hits
df <- df %>%
  mutate(hits_c = H - mean(H))

## Fit the linear model and specify the priors
rbi_bayes <- map(
  alist(
    RBI ~ dnorm(mu, sigma),     # dependent variable (RBI)
    mu <- a + b*hits_c,				  # linear model regressing RBI on H
    a ~ dnorm(70, 25),          # prior for the intercept
    b ~ dnorm(0, 20),           # prior for the beta coefficient
    sigma ~ dunif(0, 20)),      # prior for the model standard error
  data = df)

## model output
precis(rbi_bayes)

Predict the number of RBI for our batter with 100 hits

The nice part of the Bayesian framework is that we can sample from the posterior distribution of our model and build simulations.

We start by extracting samples from our model fit. We can also then plot these samples and get a clearer understanding of the certainty we have around our model output.

 

coef_sample <- extract.samples(rbi_bayes)
coef_sample[1:10, ]

## plot the beta coefficient
hist(coef_sample$b,
     main = "Posterior Samples of Hits Centered Coefficient\nFrom Bayesian Model",
     xlab = "Hits Centered Coefficient")

 

Our batter had 100 hits. Remember, since our model independent variable is now centered to the population mean we will need to center the batter’s 100 hits to that value prior to using our model to predict the number of RBI’s we’d expect. We will then plot the full sample of the predicted number of RBI’s to reflect our uncertainty around the batter’s performance.

 

## Hypothetical batter with 100 hits
hits <- 100
hits_centered <- hits - mean(df$H)

## Predict RBI total from the model
rbi_sample <- coef_sample$a + coef_sample$b * hits_centered

## Plot the posterior sample
hist(rbi_sample,
     main = "Posterior Prediction of RBI for batter with 100 hits",
     xlab = "Predicted RBI Total")

 

Highest Posterior Density Interval (HPDI)

We now create an interval around the RBI samples. Bayesians often discuss intervals around a point estimate under various names (e.g., Credible Intervals, Quantile Intervals, Highest Posterior Density Intervals, etc.). Here, we will use Richard McElreath’s HPDI() function to calculate the highest posterior density interval. I’ll also use qnorm() to show how to obtain the 95% interval of the sample.

 

## HPDI
round(HPDI(sample = rbi_sample, prob = 0.95), 1)

## 95% Interval
round(qnorm(p = c(0.025, 0.975), mean = mean(rbi_sample), sd = sd(rbi_sample)), 1)

Prediction Interval

We can also use the samples to create a prediction interval for a single batter (remember, this will be larger than the confidence interval). To do so, we first simulate from our Bayesian model for the number of hits the batter had (100) and center that to the population average for hits.

 

## simulate from the posterior
rbi_sim <- sim(rbi_bayes, data = data.frame(hits_c = hits_centered))

## 95% Prediction interval
PI(rbi_sim, prob = 0.95)

Plotting Intervals Across All of the Data

Finally, we will plot our confidence intervals and prediction intervals across the range of our entire data. We will need to make sure to center the vector of hits that we create using the mean of hits from our original data set.

 

## Create a vector of hits
hit_df <- data.frame(H = seq(from = min(df$H), to = max(df$H), by = 1))

## Center the hits to the average number of hits in the original data
hit_df <- hit_df %>%
  mutate(hits_c = H - mean(df$H))

## Use link() to calculate a mean value across all simulated values from the posterior distribution of the model
mus <- link(rbi_bayes, data = hit_df)

## get the average and highest density interval from our mus
mu_avg <- apply(X = mus, MARGIN = 2, FUN = mean)
mu_hdpi <- apply(mus, MARGIN = 2, FUN = HPDI, prob = 0.95)

## simulate from the posterior across our vector of centered hits
sim_rbi_range <- sim(rbi_bayes, data = hit_df)

## calculate the prediction interval over this simulation
rbi_pi <- apply(sim_rbi_range, 2, PI, prob = 0.95)


## create a plot of confidence intervals and prediction intervals
plot(df$H, df$RBI, pch = 19, 
     col = "light grey",
     main = "RBI ~ H",
     xlab = "Hits",
     ylab = "RBI")
shade(rbi_pi, hit_df$H, col = col.alpha("light blue", 0.5))
shade(mu_hdpi, hit_df$H, col = "green")
lines(hit_df$H, mu_avg, col = "red", lwd = 3)

 

Similar to our frequentist intervals, we see that the 95% HPDI (green) is very narrow and close to the regression line (red). We also see that the 95% prediction interval (light blue) is much larger than the HPDI, as it is accounting for more uncertainty when attempting to make a prediction for a single individual within the population.

Wrapping Up

When making predictions it is always important to reflect the uncertainty that you have around your model’s outcome. Confidence intervals and prediction intervals are two ways to present your uncertainty for two different objectives. Confidence intervals say something about the uncertainty for the population average while prediction intervals attempt to provide uncertainty for an individual within the population. As a consequence, prediction intervals end up being wider than confidence intervals. The approach to sharing your model’s uncertainty can be done from either a frequentist or Bayesian perspective.

To access all of the code for this blog article, CLICK HERE.

 

Using Beta-Binomial Regression to Set Priors for Different Sample Sizes

In a prior post I explained an approach for using Bayes to estimate a player’s 3pt% based on prior knowledge of 3pt success in NBA players. This approach took advantage of the beta-binomial conjugate.

In that post, I constrained our analysis to only players that had 200 or more 3pt attempts during the course of a season. But, what if we don’t want to only focus on players that obtained a certain number of 3 point attempts? What about players who only took 100 attempts? 10 attempts? 2 attempts?! What can be said about their performance?

Today, we will discuss the approach of using beta-binomial regression to first establish a prior 3pt%, based on the number of 3pt attempts the player shot (sample size) and then update that prior based on the success that the player had in those attempts.

The code for this article is available on my GITHUB page.

A few references of books that I’ve found useful for building Bayesian models are at the end of this post. The approach here was inspired by Chapter 7 of David Robinson’s fantastic book, Introduction to Empirical Bayes: Examples from Baseball Statistics.

The Data

We will use the three point attempts data for all players in the 2022 NBA season. Data was scraped from basketball-reference.com. In total, there are 740 rows of data. Here is what the first four look like:

Plotting the Data

To help wrap our heads around the relationship between 3pt% and 3pt attempts, we will build a simple plot.
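
The figure itself isn’t embedded here; a sketch of how it could be built with ggplot2, assuming the tbl2022 data frame (used in the code below) carries three_pt_att and three_pt_pct columns from the scrape:

## 3pt% against 3pt attempts with a simple linear trend line
tbl2022 %>%
  ggplot(aes(x = three_pt_att, y = three_pt_pct)) +
  geom_point(alpha = 0.5) +
  geom_smooth(method = "lm", se = FALSE, color = "red") +
  labs(x = "3pt Attempts",
       y = "3pt%",
       title = "3pt% vs 3pt Attempts, 2022 Season")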

Notice that the number of 3 point attempts has some influence on 3pt%. First, as 3pt attempts increase, so does 3pt%. This is because better 3pt shooters will take more 3pt shots and, because they are good, their teams will also try and put them in position to take these shots.  Additionally, we can see that as 3pt attempts increase, the amount of variance in performance decreases. Players with under 100 3pt attempts are relatively spread out around the regression line.

Bayesian Shrinkage using the Beta-Binomial Conjugate

As a review from the prior blog article on this topic, we can use our knowledge of the 3pt% success of NBA players as a prior to help shrink observations towards the “expected” outcome. The amount of shrinkage a player exhibits will be dependent on the number of 3pt attempts they have. A smaller sample size means we have less confidence in the observed performance and thus greater shrinkage towards the population prior. Conversely, a large sample size means more confidence in the observed performance and therefore much less shrinkage.

From the prior article we estimated the alpha and beta parameters for our beta distribution to be 61.8 and 106.2, respectively. Recall that these values were estimated from the prior 2 seasons and using only those players with 200 or more 3pt attempts. The alpha and beta parameters provide us with a prior mean for NBA 3pt% of 36.8%.

We will apply this prior knowledge to the observations of all players in our 2022 data set by using the beta-binomial conjugate.

alpha <- 61.8
beta <- 106.2

prior_mu <- alpha / (alpha + beta)
prior_mu

tbl2022 <- tbl2022 %>%
  mutate(three_pt_missed = three_pt_att - three_pt_made,
         posterior_alpha = three_pt_made + alpha,
         posterior_beta = three_pt_missed + beta,
         posterior_three_pt_pct = posterior_alpha / (posterior_alpha + posterior_beta),
         posterior_three_pt_sd = sqrt((posterior_alpha * posterior_beta) / ((posterior_alpha + posterior_beta)^2 * (posterior_alpha + posterior_beta + 1))))

 

Next, we create a plot to see how the beta-binomial posterior for each player looks relative to the raw data (plotted on the left):
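
The plotting code isn’t included above; a rough two-panel sketch (raw on the left, posterior on the right), assuming tbl2022 includes a three_pt_pct column from the scrape:

## raw 3pt% vs beta-binomial posterior 3pt%, both plotted against attempts
tbl2022 %>%
  select(three_pt_att, Raw = three_pt_pct, Posterior = posterior_three_pt_pct) %>%
  pivot_longer(cols = c(Raw, Posterior),
               names_to = "estimate",
               values_to = "three_pt") %>%
  mutate(estimate = factor(estimate, levels = c("Raw", "Posterior"))) %>%
  ggplot(aes(x = three_pt_att, y = three_pt)) +
  geom_point(alpha = 0.5) +
  geom_hline(yintercept = alpha / (alpha + beta),
             color = "red",
             linetype = "dashed") +
  facet_wrap(~estimate) +
  labs(x = "3pt Attempts",
       y = "3pt%")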

 

Combining our prior and observed values we can see (on the right) that the data are now constrained around the prior (36.8%). Players with a small number of observations (on the left) are pulled nearest to the line while players with larger observations (on the right) are less influenced by the prior and tend to remain closer to their observed performance.

The problem is that the prior mean (36.8%) is too high for the players with a small number of 3pt attempts. Surely we wouldn’t want to make the assumption that their performance is close to the prior built from players that had over 200 attempts! For example, the players with under 50 attempts have an observed average 3pt% of 30% and a median 3pt% of 25% (just over 10% less than those who had 200 or more attempts!).

To deal with this issue we need to account for shot attempts first so that we can estimate players with smaller sample sizes relative to a prior performance that is more appropriate for them (i.e., a prior that is lower than the one currently assumed by our alpha and beta parameters).

Accounting for 3pt Shot Attempts

Our outcome variable is binomial (success and failures) so we will use a beta-binomial regression to estimate a prior for 3pt% while controlling for 3pt shot attempts.

suppressPackageStartupMessages({
  suppressWarnings({
    library(gamlss)
  })
})

fit_3pt <- gamlss(cbind(three_pt_made, three_pt_missed) ~ log(three_pt_att),
                  data = tbl2022,
                  family = BB(mu.link = "identity"))

## extract model coefficients
fit_3pt$mu.coefficients
fit_3pt$sigma.coefficients

 

Now we can use these model coefficients to fit an estimated 3pt% for each player based on their number of 3pt attempts. This estimation will serve as our prior, which we will then turn into a new posterior 3pt% for each player using our beta-binomial approach. Note that while the mean 3pt% for each player will vary depending on shot attempts, the population sigma will be constant for all athletes, representing the variance that we expect all of those in the population to similarly exhibit.

tbl2022 <- tbl2022 %>%
  mutate(mu = fitted(fit_3pt, parameter = "mu"),
         sigma = fitted(fit_3pt, parameter = "sigma"),
         prior_alpha_reg = mu / sigma,
         prior_beta_reg = (1 - mu) / sigma,
         posterior_alpha_reg = prior_alpha_reg + three_pt_made,
         posterior_beta_reg = prior_beta_reg + three_pt_missed,
         posterior_mu_reg = posterior_alpha_reg / (posterior_alpha_reg + posterior_beta_reg))

 

We now have two estimates of each player’s 3pt%. One that was calculated using the beta-binomial conjugate with a prior of 36.8% (the average 3pt% for all shooters with 200 or more 3pt shots). The second estimate first establishes a prior based on the number of 3pt shots the player has taken and then updates that prior based on the individual player’s performance in those shots. We can plot the relationship between these two.
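
The comparison plot code isn’t shown; here is a sketch matching the description below, with the fixed-prior estimate on the x-axis and the regression-prior estimate on the y-axis:

## fixed-prior posterior vs attempt-informed (regression prior) posterior
tbl2022 %>%
  ggplot(aes(x = posterior_three_pt_pct, y = posterior_mu_reg, size = three_pt_att)) +
  geom_point(alpha = 0.5) +
  geom_abline(intercept = 0,
              slope = 1,
              color = "red",
              size = 1.2) +
  labs(x = "Posterior 3pt% (fixed 36.8% prior)",
       y = "Posterior 3pt% (beta-binomial regression prior)",
       size = "Attempts")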

 


Notice that those with more 3 point attempts are close to the red line (intercept = 0, slope = 1), representing perfect agreement between the two estimates, while those with fewer attempts are pulled further down, indicating that we estimate them to be poorer 3pt shooters.

Finally, we can compare the results from our beta-binomial regression prior with our other two estimates of performance (raw observations and our prior of 36.8%).

 

In the right most plot (beta-binomial regression prior) we see those with a small number of 3pt attempts are shrunk to a smaller prior 3pt% than those with a larger number of 3pt attempts.

 

Making an estimation for a new player

We can use this approach to estimate the performance of a new player, as well.

new_player <- data.frame(three_pt_att = 10,
                         three_pt_made = 2,
                         three_pt_missed = 10 - 2,
                         three_pt_pct = 2 / 10)

new_player %>%
  mutate(mu = predict(fit_3pt, newdata = new_player),
         sigma = exp(fit_3pt$sigma.coefficients),
         prior_alpha = mu / sigma,
         prior_beta = (1 - mu) / sigma,
         posterior_alpha = prior_alpha + three_pt_made,
         posterior_beta = prior_beta + three_pt_missed,
         posterior_mu = posterior_alpha / (posterior_alpha + posterior_beta)) %>%
  pivot_longer(cols = everything())

 

The new player took 10 three point shots, made 2, and has an observed 3pt% of 20%. Using our beta-binomial regression model we estimate the prior for a player with 10 attempts to be 0.274. Combining the prior with the 10 attempts we get a posterior 3pt% for the player of 0.272 (slightly below the average for the population of players who had 10 attempts).

Useful Resources

Some textbooks that I’ve found useful for exploring this type of work:

Deriving a Confidence Interval from an Estimate and a p-value

Although most journals require authors to include confidence intervals in their papers, it isn’t mandatory for all journals (merely a recommendation). Additionally, there may be occasions when you are reading an older paper from a time when this mandate/recommendation was not enforced. Finally, sometimes abstracts (due to word count limits) might only present p-values and estimates, in which case you might want to quickly obtain the confidence intervals to help organize your thoughts prior to diving into the paper. In these instances, you might be curious as to how you can get a confidence interval around the observed effect when all you have is a p-value.

Bland & Altman wrote a short piece on deriving a confidence interval from only an estimate and a p-value in a 2011 paper in the British Medical Journal:

Altman, DG. Bland, JM. (2011). How to obtain the confidence interval from a p-value. BMJ, 343: 1-2.

Before going through the approach, it is important to note that they indicate a limitation of this approach is that it won’t be as accurate in smaller samples, but the method can work well in larger studies (~60 subjects or more).

The Steps

The authors list the following steps to derive the confidence interval from an estimate and p-value:

  1. Calculate the test statistic for a normal distribution from the p-value.
  2. Calculate the standard error (ignore the minus sign).
  3. Calculate the 95% CI using the standard error and a z-critical value for the desired level of confidence.
  4. When doing this approach for a ratio (e.g., Risk Ratio, Odds Ratio, Hazard Ratio), the formulas should be used with the estimate on the log scale (if it isn’t already) and then exponentiate (antilog) the confidence intervals to put the results back on the natural scale.

Calculating the test statistic

To calculate the test statistic use the following formula:

z = -0.862 + sqrt(0.743 - 2.404 * log(p.value))

Calculating the standard error

To calculate the standard error use the following formula (remember that we are ignoring the sign of the estimate):

se = abs(estimate) / z

If we are dealing with a ratio, make sure that you are working on the log scale:

se = abs(log(estimate)) / z

Calculating the 95% Confidence Limits

Once you have the standard error, the 95% Confidence Limits can be calculated by multiplying the standard error by the z-critical value of 1.96:

CL.95 = 1.96 * se

From there, the 95% Confidence Interval can be calculated:

low95 = Estimate - CL.95
high95 = Estimate + CL.95

Remember, if you are working with rate statistics and you want to get the confidence interval on the natural scale, you will need to take the antilog:

low95 = exp(Estimate - CL.95)
high95 = exp(Estimate + CL.95)

 

Writing a function

To make this simple, I’ll write everything into a function. The function will take two arguments, which you will need to obtain from the paper:

  1. p-value
  2. The estimate (e.g., difference in means, risk ratio, odds ratio, hazard ratio, etc)

The function will default to log = FALSE but if you are working with a rate statistic you can change the argument to log = TRUE to get the results on both the log and natural scales. The function also takes a sig_digits argument, which defaults to 3 but can be changed depending on how many significant digits you need.

estimate_ci_95 <- function(p_value, estimate, log = FALSE, sig_digits = 3){
  
  if(log == FALSE){
    
    z <- -0.862 + sqrt(0.743 - 2.404 * log(p_value))
    z

    se <- abs(estimate) / z
    se
    
    cl <- 1.96 * se
    
    low95 <- estimate - cl
    high95 <- estimate + cl
    
    list('Standard Error' = round(se, sig_digits),
         '95% CL' = round(cl, sig_digits),
         '95% CI' = paste(round(low95, sig_digits), round(high95, sig_digits), sep = ", "))
    
  } else {
    
    if(log == TRUE){
      
      z <- -0.862 + sqrt(0.743 - 2.404 * log(p_value))
      z
      
      se <- abs(estimate) / z
      se
      
      cl <- 1.96 * se
      
      low95_log_scale <- estimate - cl
      high95_log_scale <- estimate + cl
      
      low95_natural_scale <- exp(estimate - cl)
      high95_natural_scale <- exp(estimate + cl)
      
      list('Standard Error (log scale)' = round(se, sig_digits),
           '95% CL (log scale)' = round(cl, sig_digits),
           '95% CL (natural scale)' = round(exp(cl), sig_digits),
           '95% CI (log scale)' = paste(round(low95_log_scale, sig_digits), round(high95_log_scale, sig_digits), sep = ", "),
           '95% CI (natural scale)' = paste(round(low95_natural_scale, sig_digits), round(high95_natural_scale, sig_digits), sep = ", "))
      
    }
    
  }
  
}

Test the function out

The paper provides two examples, one for a difference in means and the other for risk ratios.

Example 1

Example 1 states:

“the abstract of a report of a randomised trial included the statement that “more patients in the zinc group than in the control group recovered by two days (49% v 32%, P=0.032).” The difference in proportions was Est = 17 percentage points, but what is the 95% confidence interval (CI)?”

estimate_ci_95(p_value = 0.032, estimate = 17, log = FALSE, sig_digits = 1)

 

 

Example 2

Example 2 states:

“the abstract of a report of a cohort study includes the statement that “In those with a [diastolic blood pressure] reading of 95-99 mm Hg the relative risk was 0.30 (P=0.034).” What is the confidence interval around 0.30?”

Here we change the argument to log = TRUE since this is a ratio statistic and needs to be handled on the log scale.

estimate_ci_95(p_value = 0.034, estimate = log(0.3), log = TRUE, sig_digits = 2)

Try the approach out on a different data set to confirm the confidence intervals are calculated properly

Below, we build a simple logistic regression model for the PimaIndiansDiabetes data set from the {mlbench} package.

  • The model coefficients are odds ratios on the log scale, so we set the argument log = TRUE to ensure we get results reported back to us on the natural scale, as well.
  • We use the summary() function to obtain the model estimates and p-values.
  • We use the confint() function to get the 95% Confidence Intervals from the model.
  • To get the confidence intervals on the natural scale we also take the exponent, exp(confint()).
  • We use our custom function, estimate_ci_95(), to see how well the results compare.
## get data
library(mlbench)
data("PimaIndiansDiabetes")
df <- PimaIndiansDiabetes

## turn outcome variable into a numeric (0 = negative for diabetes, 1 = positive for diabetes)
df$diabetes_num <- ifelse(df$diabetes == "pos", 1, 0)
head(df)

## simple model
diabetes_glm <- glm(diabetes_num ~ pregnant + glucose + insulin, data = df, family = "binomial")

## model summary
summary(diabetes_glm)

 

 

Calculate 95% CI from the p-values and odds ratio estimates.

Pregnant Coefficient

## 95% CI for the pregnant coefficient
estimate_ci_95(p_value = 2.11e-06, estimate = 0.122, log = TRUE, sig_digits = 3)

 

Glucose Coefficient

## 95% CI for the glucose coefficient
estimate_ci_95(p_value = 2e-16, estimate = 0.0375, log = TRUE, sig_digits = 3)

 

Insulin Coefficient

## 95% CI for the insulin coefficient
estimate_ci_95(p_value = 0.677, estimate = -0.0003, log = TRUE, sig_digits = 5)

 

Evaluate the results from the custom function to those calculated with the confint() function.

## Confidence Intervals on the Log Scale
confint(diabetes_glm)

## Confidence Intervals on the Natural Scale
exp(confint(diabetes_glm))

We get nearly the same results with a bit of rounding error!

Hopefully this function will be of use to some people as they read papers or abstracts.

You can find the complete code in a cleaned up markdown file on my GITHUB page.

Tyrese Maxey’s 3pt%, Bayes, Shrinkage

Some friends were discussing Philadelphia 76ers’ point guard, Tyrese Maxey’s, three point% today. They were discussing how well he has performed over 72 games with a success rate of 43% behind the arc (at the time this data was scraped, 4/6/2022). While his percentage from 3pt range is very impressive, I did notice that he has 294 attempts, which is fewer than 3 out of the 4 players that are ahead of him (Kyrie only has 214 attempts and he is ranked 3rd at the time of this writing), and Steph Curry is just behind Maxey in the ranking (42.4% success) with nearly 70 more attempts.

The question becomes, how confident can we be in Maxey’s three point percentage relative to those with more attempts? We will take a Bayesian approach, using a beta conjugate, to consider the success rate of these players relative to what we believe the average three point success rate is for an NBA shooter (our prior), which we will determine from observing 3 point shooting over the previous 3 seasons.

NOTE: On basketball-reference.com, they have a nice check box that automatically will filter out players that are non-qualifiers for rate stats. After playing around with this, it appears that 200 attempts is their cut off. So, I will keep that and filter the data down to only those with 200 or more 3pt attempts.

All of the code, web scraping, and csv files of the data (if you are looking to run it prior to when I scraped it) are available on my GITHUB PAGE.

Exploratory Data Analysis

First, let’s view the top 10 three point shooters this season (size of the dot represents the number of three point attempts taken).

Visualize the distribution of three point attempts and three point% for the 2022 season, so far.

Establishing Our Prior

Since we are dealing with a binary outcome of successes (made the shot) and failures (missed the shot) we will use the beta distribution, which is the conjugate prior for the binomial distribution.

The beta distribution has two parameters, alpha and beta. To determine what these parameters should be, we will use the method of moments with the data from the previous three seasons.

To do this, we need to first find the mean and variance for the previous three seasons.
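
That calculation isn’t shown here; as a sketch, assuming the previous three seasons of qualified shooters are stored in a data frame called tbl_prior (a hypothetical name) with a three_pt_pct column:

## mean and variance of 3pt% from the previous three seasons
## (tbl_prior is a hypothetical stand-in for the scraped prior-season data)
prior_avg <- mean(tbl_prior$three_pt_pct, na.rm = TRUE)
prior_var <- var(tbl_prior$three_pt_pct, na.rm = TRUE)

prior_avg
prior_var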

Next, we create a function that calculates alpha and beta based on the mean and variance from our observed data.

# function for calculating alpha and beta
beta_parameters <- function(dist_avg, dist_var){
  alpha <- dist_avg * (dist_avg * (1 - dist_avg)/dist_var - 1)
  beta <- alpha * (1 - dist_avg)/dist_avg
  list(alpha = alpha,
       beta = beta)
}


The function works to produce the two parameters we need. The data is returned in list format, so we will call each element of the list and store the respective values in their own variable.


The alpha and beta parameters derived from our method of moments function appear to produce the mean and standard deviation correctly. We can plot this distribution to see what it looks like.
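
A sketch of that plot, assuming the list elements were stored as alpha and beta (the variable names used in the posterior calculations below):

## visualize the beta prior for NBA 3pt%
curve(dbeta(x, shape1 = alpha, shape2 = beta),
      from = 0.2,
      to = 0.55,
      lwd = 4,
      col = "blue",
      xlab = "3pt%",
      ylab = "Density",
      main = "Beta Prior for NBA 3pt%")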

Update the 3pt% of the players in the 2022 season with our beta prior

We calculate our Bayes adjusted three point percentage for the players by adding their successes to `alpha` and their failures to `beta` and then calculating the new posterior percentage as

alpha / (alpha + beta)

and the posterior standard deviation as

sqrt((alpha * beta) / ((alpha + beta)^2 * (alpha + beta + 1)))

tbl2022 <- tbl2022 %>%
  mutate(three_pt_missed = three_pt_att - three_pt_made,
         posterior_alpha = three_pt_made + alpha,
         posterior_beta = three_pt_missed + beta,
         posterior_three_pt_pct = posterior_alpha / (posterior_alpha + posterior_beta),
         posterior_three_pt_sd = sqrt((posterior_alpha * posterior_beta) / ((posterior_alpha + posterior_beta)^2 * (posterior_alpha + posterior_beta + 1))))

Have any of the players in the top 10 changed following the adjustment?

  • We see that Desmond Bane has jumped Kyrie, who only had 214 attempts. Kyrie dropped from 3rd to 6th.
  • Tyrese Maxey moves up one spot to 4.
  • Grant Williams drops out of the top 10 while Tyrese Haliburton moves up into the top 10.

We can plot the results of these top 10 players showing the posterior Bayes three point% relative to their observed three point%.
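
That plot isn’t reproduced here; a sketch of one way to draw it, assuming tbl2022 carries the player and observed three_pt_pct columns from the scrape:

## top 10 by posterior 3pt%: observed vs Bayes adjusted
tbl2022 %>%
  arrange(desc(posterior_three_pt_pct)) %>%
  slice(1:10) %>%
  select(player, Observed = three_pt_pct, Posterior = posterior_three_pt_pct) %>%
  pivot_longer(cols = -player,
               names_to = "estimate",
               values_to = "pct") %>%
  ggplot(aes(x = pct, y = reorder(player, pct))) +
  geom_point(aes(color = estimate), size = 4) +
  labs(x = "3pt%",
       y = NULL,
       color = NULL)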

Show the uncertainty in Tyrese Maxey’s performance versus Luke Kennard, who has 409 attempts

kennard <- tbl2022 %>%
  filter(player == "Luke Kennard")

maxey <- tbl2022 %>%
  filter(player == "Tyrese Maxey")

plot(density(rbeta(n = 1e6, shape1 = maxey$posterior_alpha, shape2 = maxey$posterior_beta)),
     col = "blue",
     lwd = 4,
     ylim = c(0, 20),
     xlab = "3pt %",
     main = "Bayes Adjusted 3pt%\nBlue = Tyrese Maxey | Red = Luke Kennard")
lines(density(rbeta(n = 1e6, shape1 = kennard$posterior_alpha, shape2 = kennard$posterior_beta)),
      col = "red",
      lwd = 4)

If we sample from the posterior for both players, how much better is Kennard?

maxey_sim <- rbeta(n = 1e6, shape1 = maxey$posterior_alpha, shape2 = maxey$posterior_beta)

kennard_sim <- rbeta(n = 1e6, shape1 = kennard$posterior_alpha, shape2 = kennard$posterior_beta)

plot(density(kennard_sim - maxey_sim),
     lwd = 4,
     col = "black",
     main = "Kennard Posterior Sim - Maxie Posterior Sim",
     xlab = "Difference between Kennard & Maxie")
abline(v = 0,
       lwd = 4,
       lty = 2,
       col = "red")

Kennard was better in ~74% of the 1,000,000 simulations.

Long story short, Tyrese Maxey has been a solid 3pt shooter; he just happens to play on a team where James Harden takes many of the shots (maybe he should distribute the ball more?).

One last thing…..Shrinkage

So, what happened? Basically, the Bayesian adjustment created “shrinkage” whereby the players that are above average are pulled down slightly towards the population average and the players below average are pulled up slightly towards the population average. The amount of shrinkage a player exhibits depends on the number of attempts they have had (the size of their sample). More attempts leads to less shrinkage (more certainty about their performance) and fewer attempts leads to more shrinkage (less certainty about their performance). Basically, if we haven’t seen you shoot very much then our best guess is that you are probably closer to average until we are provided more evidence to believe otherwise.

Since we were originally dealing with only players that have had 200 or more three point attempts, let’s scrape all players from the 2022 season and apply the same approach to see what shrinkage looks like.

## {rvest} is needed for read_html(), html_nodes(), and html_table()
library(rvest)

url2022 <- read_html("https://www.basketball-reference.com/leagues/NBA_2022_totals.html")

tbl2022a <- html_nodes(url2022, 'table') %>%
  html_table(fill = TRUE) %>%
  pluck(1) %>%
  janitor::clean_names() %>%
  select("player", three_pt_att = "x3pa", three_pt_made = "x3p", three_pt_pct = "x3p_percent") %>%
  filter(player != "Player") %>%
  mutate(across(.cols = three_pt_att:three_pt_pct,
                ~as.numeric(.x))) %>%
  filter(!is.na(three_pt_pct)) %>%
  arrange(desc(three_pt_pct)) %>%
  mutate(three_pt_missed = three_pt_att - three_pt_made,
         posterior_alpha = three_pt_made + alpha,
         posterior_beta = three_pt_missed + beta,
         posterior_three_pt_pct = posterior_alpha / (posterior_alpha + posterior_beta),
         posterior_three_pt_sd = sqrt((posterior_alpha * posterior_beta) / ((posterior_alpha + posterior_beta)^2 * (posterior_alpha + posterior_beta + 1))))


tbl2022a %>%
  mutate(pop_avg = alpha / (alpha + beta)) %>%
  ggplot(aes(x = three_pt_pct, y = posterior_three_pt_pct, size = three_pt_att)) +
  geom_point(color = "black",
             alpha = 0.8) +
  geom_hline(aes(yintercept = pop_avg),
             color = "green",
             size = 1.2,
             linetype = "dashed") +
  geom_abline(intercept = 0,
              slope = 1,
              size = 1.2,
              color = "green") +
  labs(x = "Observed 3pt%",
       y = "Bayesian Adjusted 3pt%",
       size = "Attempts",
       title = "Shirnkage of 3pt% using Beta-Conjugate",
       caption = "Data Source: https://www.basketball-reference.com/leagues/NBA_2022_totals.html")

What does this tell us?

  • Points closest to the diagonal line (the line of equality, where the Bayes adjusted and observed 3pt% are identical) see almost no shrinkage; their adjusted value stays very close to their observed 3pt%.
  • Notice that the points nearest the line also tend to be larger, meaning we have more observations and more certainty about that player’s true skill.
  • The horizontal dashed line represents the population average (determined from the alpha and beta parameters obtained from previous 3 seasons).
  • Notice that the smaller points (fewer observations) get shrunk towards this line given we haven’t seen enough from that player to believe differently. For example, the tiny dot to the far right indicates the player has an observed 3pt% of 100%, which we wouldn’t really believe to be sustainable for the full season (maybe the player took one or two shots and got lucky?). So that point is pulled downwards towards the dashed line as our best estimate is that the player is closer to an average shooter.