Category Archives: Model Building in R

XGBOOST Tuning – Tutorial in {xgboost} and {tidymodels}

A colleague recently asked me about XGBOOST (Extreme Gradient Boosting) models so I figured I’d put together a short tutorial of using XGBOOST both with the {xgboost} package and within the {tidymodels} environment.

Like all machine learning models, XGBOOST has a number of different knobs and levers we can twist, push, and pull in order to tune the model and identify the best parameters for learning from the data. One can get a look at these parameters by loading the {xgboost} package and then typing ?xgboost into the console.

XGBOOST falls in the class of tree-based models. It is an effective machine learning algorithm for identifying patterns in data and it can be used for regression or classification problems.

Fitting the model in {tidymodels} is pretty straightforward as far as the {tidymodels} syntax goes. However, if fitting models with the {xgboost} package, there are a number of considerations we need to make with the data. For example, XGBOOST can only use numeric data, so any categorical features will need to be one-hot-encoded. Additionally, the data cannot be in data frame form, but rather needs to be turned into a matrix.

Let’s jump in!

(NOTE: All code is available on my Github page.)

Load Data & Packages

We will keep the data simple and use the mtcars data from base R. XGBOOST is well suited for complex data with a lot of interactions and non-linearity, but I’d like to use a simple data set so that the models can run fast for you and you get the gist of fitting them. Tuning all of the parameters can take a lot of time if you are dealing with a big data set.

Our goal is to predict mpg using the other features in the data.

{xgboost} for a regression problem

First we split our data into a training and testing data set.

set.seed(2025)
N <- nrow(mtcars)
ids <- sample(x = 1:N, size = floor(N * 0.7), replace = FALSE)
train <- mtcars[ids, ]
test <- mtcars[-ids, ]

Prepare the data

For the {xgboost} package to run we need the predictor variables to be in a data matrix and the outcome variable to be a vector. We then place the prepared data into a dense matrix (xgb.DMatrix) so we can run our XGBOOST model.

train_x <- data.matrix(train[, -1])
train_y <- train$mpg

test_x <- data.matrix(test[, -1])
test_y <- test$mpg

xgb_train <- xgb.DMatrix(data = train_x, label = train_y)
xgb_test <- xgb.DMatrix(data = test_x, label = test_y)

Tune the model

First, we walk through an example of how the xgb.cv() function works and what it outputs. Then we can write a loop to extract the relevant information for the purposes of tuning the hyperparameters.

Notice that we specified a variety of hyperparameters (e.g., max.depth, eta, subsample, and colsample_bytree). These aren't all of the possible hyperparameters that can be tuned, though. Rather than hash them all out here, you can find a list of their definitions by either using the help function in the R console, ?xgb.cv, or checking out THIS webpage.

fit_example <- xgb.cv(data = xgb_train,
                      params = list(booster = "gbtree",
                        objective = "reg:squarederror",
                        max.depth = 15,
                        eta = 0.1,
                        subsample = 0.1,
                        colsample_bytree = 0.5),
                      eval_metric = "rmse",
                      nfold = 5,
                      watchlist = list(train = xgb_train, test = xgb_test),
                      early_stopping_rounds = TRUE,
                      nrounds = 1000)

There are a number of features within the model that we can extract:

  • best_iteration indicates the best iteration of the model fit, represented as the nrounds argument in our function.

  • evaluation_log provides us with information about the RMSE for all iterations of model fitting.

We can directly select the best iteration from the evaluation log.

Here we see the RMSE for the training and testing data at the best iteration.
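A minimal sketch of pulling both pieces out of the fitted object (assuming {dplyr} is loaded for the pipe and filter()):

## best iteration from the cross validated fit
fit_example$best_iteration

## RMSE for the train and test folds at that iteration
fit_example$evaluation_log %>%
  as.data.frame() %>%
  filter(iter == fit_example$best_iteration)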

Now that we know what we are looking at with the xgb.cv() output, let’s write a grid of possible values for the hyperparameters and create a data frame, grid_params that stores this information.

eta <- seq(0.1, 0.9, 0.1)
max_depth <- c(5, 10, 15)
subsample <- c(0.5, 0.75, 1)
colsample_bytree <- c(0.5, 0.75, 1)
nrounds <- seq(100, 500, 100)

grid_params <- expand.grid(nrounds = nrounds,
                           max_depth = max_depth, 
                           subsample = subsample,
                           colsample_bytree = colsample_bytree,
                           eta = eta)

head(grid_params)

n <- nrow(grid_params)

Next, we need an empty data frame to store the best iteration for each row of our grid parameters data frame.

output_df <- data.frame("iter" = rep(NA, n), 
                           "train_rmse_mean" = rep(NA, n), 
                           "train_rmse_std" = rep(NA, n), 
                           "test_rmse_mean" = rep(NA, n), 
                           "test_rmse_std" = rep(NA, n))

Now run a for() loop to find the best parameters for the data!

 

for(i in 1:n){
  
  fit <- xgb.cv(data = xgb_train,
                params = list(booster = "gbtree",
                              objective = "reg:squarederror",
                              max.depth = grid_params$max_depth[i],
                              eta = grid_params$eta[i],
                              subsample = grid_params$subsample[i],
                              colsample_bytree = grid_params$colsample_bytree[i]),
                eval_metric = "rmse",
                nfold = 5,
                nrounds = grid_params$nrounds[i],
                watchlist = list(train = xgb_train, test = xgb_test),
                early_stopping_rounds = TRUE,
                verbose = FALSE)
  
  best_fit <- fit$best_iteration
  
  output_df[i, 1] <- fit$evaluation_log %>% as.data.frame() %>% filter(iter == best_fit) %>% pull(iter)
  output_df[i, 2] <- fit$evaluation_log %>% as.data.frame() %>% filter(iter == best_fit) %>% pull(train_rmse_mean)
  output_df[i, 3] <- fit$evaluation_log %>% as.data.frame() %>% filter(iter == best_fit) %>% pull(train_rmse_std)
  output_df[i, 4] <- fit$evaluation_log %>% as.data.frame() %>% filter(iter == best_fit) %>% pull(test_rmse_mean)
  output_df[i, 5] <- fit$evaluation_log %>% as.data.frame() %>% filter(iter == best_fit) %>% pull(test_rmse_std)

}

Join the grid_params data set with the output_df of our for() loop and select the output that had the lowest train set RMSE.
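A sketch of that step (assuming {dplyr} is loaded; best_params is the object referenced in the next code chunk):

best_params <- grid_params %>%
  bind_cols(output_df) %>%
  arrange(train_rmse_mean) %>%
  slice(1)

best_params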

Fit the model

After tuning, we want to fit the model to the optimized hyperparameters stored in the best_params element.

## fit model
fit_mpg <- xgboost(data = xgb_train,
                   params = list(booster = "gbtree",
                                 objective = "reg:squarederror",
                                 max.depth = best_params$max_depth,
                                 eta = best_params$eta,
                                 subsample = best_params$subsample,
                                 colsample_bytree = best_params$colsample_bytree),
                   eval_metric = "rmse",
                   nrounds = best_params$nrounds)

Variables of Importance

Plot the variables of importance from the tuned model.
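A sketch using the importance helpers from {xgboost}:

## variable importance from the tuned model
var_imp <- xgb.importance(model = fit_mpg)
var_imp

xgb.plot.importance(var_imp)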

Predictions

We can plot the predictions and obtain the RMSE on our test set.

train_pred_y <- predict(fit_mpg, xgb_train)
test_pred_y <- predict(fit_mpg, xgb_test)

test_x %>%
  as.data.frame() %>%
  bind_cols(mpg = test_y) %>%
  mutate(pred_mpg = test_pred_y) %>%
  ggplot(aes(x = pred_mpg, y = mpg)) +
  geom_point() +
  geom_abline()

sqrt(mean((test_y - test_pred_y)^2))

{tidymodels} XGBOOST for regression

We can do the same type of XGBOOST model fitting in {tidymodels}.

Train/Test Split

set.seed(3333)
car_split <- initial_split(mtcars)
car_split

train <- training(car_split)
test <- testing(car_split)

Cross Validation Folds

set.seed(34)
cv_folds <- vfold_cv(
  data = train, 
  v = 5
  ) 

Model Specification

In the boost_tree() function we specify which hyperparameters we want to tune.

boost_spec <- boost_tree( trees = tune(), 
      tree_depth = tune(), 
      min_n = tune(), 
      loss_reduction = tune(), 
      sample_size = tune(), 
      mtry = tune(), 
      learn_rate = tune() ) %>%
  set_engine("xgboost") %>%
  set_mode("regression")

Workflow

Create the workflow, add the formula we want for the model, and add the model specifications.

xgb_wf <- workflow() %>%
  add_formula(mpg ~ .) %>%
  add_model(boost_spec)

Set up tuning grid

Instead of specifying specific values for each of the tuning parameters, as we did in the {xgboost} package example, we will use the grid_latin_hypercube() function to construct the parameter grid for us. We pass it a size argument to indicate how large (how many rows) we want the grid to be.

 

xgb_grid <- grid_latin_hypercube(
  tree_depth(),
  trees(),
  min_n(),
  loss_reduction(),
  sample_size = sample_prop(),
  finalize(mtry(), train),
  learn_rate(),
  size = 40
)

Hyperparameter tuning on cross validated folds

doParallel::registerDoParallel()

set.seed(66)
xgb_res <- tune_grid(
  xgb_wf,
  resamples = cv_folds,
  grid = xgb_grid,
  control = control_grid(save_pred = TRUE)
)

Obtain the cross validated outputs and identify the best model
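A sketch of this step using the standard {tune} helpers (best_xgb is the object used in the final model below):

## cross validated metrics for every candidate
collect_metrics(xgb_res)

## hyperparameter combination with the lowest RMSE
best_xgb <- select_best(xgb_res, metric = "rmse")
best_xgb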


Fit the final model

boost_spec <- boost_tree(mtry = best_xgb %>% pull(mtry),
                         trees = best_xgb %>% pull(trees),
                         min_n = best_xgb %>% pull(min_n),
                         tree_depth = best_xgb %>% pull(tree_depth),
                         learn_rate = best_xgb %>% pull(learn_rate),
                         loss_reduction = best_xgb %>% pull(loss_reduction),
                         sample_size = best_xgb %>% pull(sample_size)) %>%
  set_engine("xgboost") %>%
  set_mode("regression")


boost_fit <- boost_spec %>%
  fit(mpg ~ ., data = train)

Evaluate Test Set Performance

augment(boost_fit,
        new_data = test) %>%
  rmse(truth = mpg, estimate = .pred)

augment(boost_fit,
        new_data = test) %>%
  rsq(truth = mpg, estimate = .pred)


augment(boost_fit,
        new_data = test) %>%
  ggplot(aes(x = .pred, y = mpg)) +
  geom_jitter() +
  geom_abline(intercept = 0, slope = 1)

Wrapping up

XGBOOST is a flexible model that can be used for continuous outcomes, binary outcomes, and multi-class classification tasks. The model does have a number of tuning parameters that can help to optimize performance. However, it is relatively easy to create a grid of potential values and either write a for() loop or use the {tidymodels} framework to do the heavy lifting for you.

To obtain the full code and RMarkdown file check out my Github page.

Crossed vs Nested Random Effects

I was reading a paper this weekend and the researchers used a mixed model for their data given that they were dealing with repeated measures of individuals and teams. Mixed models have become common in sport science research and I've written about them a few times on this blog.

As I was reading the paper I was wondering, because it wasn’t specified in the methods section, if the researchers had given any thought to whether their random effects were crossed or nested. Then I thought, maybe people aren’t thinking about this and perhaps a short blog article with an example would be useful. So here we are…

For brevity, I’ll only present short snippets of code in the blog article. The entire code is available on my Github Page

Data

We need to simulate some data to work with. To keep things simple, we will create a data set of a league that has 6 teams and 60 players. Each player will have a player value metric, the outcome of interest, for every team they played on during the season.

suppressPackageStartupMessages({
  suppressMessages({
    suppressWarnings({
    library(tidyverse)
    library(lme4)
    library(arm)
    })
  })
})


theme_set(theme_bw())

set.seed(225544)
teams <- sample(c("Sharks", "Bears", "Turtles", "Bisons", "Jaguars", "Pythons"), size = 60, replace = TRUE)
players <- as.factor(sample(1:60, size = 60, replace = TRUE))
intercept <- 50
team_effect <- case_when(teams == "Sharks" ~ rnorm(1, mean = 0, sd = 7), 
                         teams == "Bears" ~ rnorm(1, mean = 0, sd = 2),
                         teams == "Turtles" ~ rnorm(1, mean = 0, sd = 1),
                         teams == "Bisons" ~ rnorm(1, mean = 0, sd = 13),
                         teams == "Jaguars" ~ rnorm(1, mean = 0, sd = 5),
                         teams == "Pythons" ~ rnorm(1, mean = 0, sd = 3))
player_effect <- rnorm(length(levels(players)), mean = 0, sd = 10)
random_noise <- 2



dat <- data.frame( team = teams, player_id = players ) %>%
  mutate(
    mu = intercept + team_effect + player_effect[players],
    player_value = round(mu + rnorm(n(), mean = 0, sd = random_noise), 1)
  ) %>%
  dplyr::select(-mu)

dat %>%
  head()

EDA

How many unique players were on each team during the season?

Which players played on multiple teams?

Team Summary Stats
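A sketch of the kinds of summaries used here (assuming {dplyr} is loaded):

## unique players per team
dat %>%
  group_by(team) %>%
  summarize(n_players = n_distinct(player_id))

## players who appeared on more than one team
dat %>%
  group_by(player_id) %>%
  summarize(n_teams = n_distinct(team)) %>%
  filter(n_teams > 1)

## team summary stats
dat %>%
  group_by(team) %>%
  summarize(avg_value = mean(player_value),
            sd_value = sd(player_value))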

Crossed vs Nested Effects

We have 14 players that played on multiple teams (Player 11 ended up playing for all 6 teams!) and, as a consequence, different teams had different numbers of individual players. For example, the Bears had 13 different players while the Pythons had 6 different players. This small detail is important to consider with respect to our random effects because crossed and nested effects behave differently with respect to how they partition variance.

When constructing mixed models we usually have a data hierarchy where observations are grouped or nested within specific categories. When the observations are nested the observations are exclusive to specific groups. Some examples:

  • Classes are nested within schools but classes in one school cannot also be grouped within a different school.
  • Athletes competing in the Olympics are nested within their respective countries but an individual cannot compete for two different countries in the same Olympic Games.

By contrast, crossed effects can occur when observations move between groups during the same study/observation period. A few examples of crossed effects are:

  • A student might start the year at one school and then move half way through the semester and attend a different school.
  • Players in team sport might be traded mid-season and play for different teams.
  • In a study where the experimental and placebo groups contain both male and female participants, we would have crossed random effects to account for sex being in both groups.
  • Looking at sports teams, we may be analyzing the behavior of position groups on different teams. The position groups are the same for each team (e.g., Forwards, Guards, and Centers on Basketball teams) so the model has crossed effects.

To help solidify this concept a bit more, I’ve created the below drawing.

We see that in the Completely Nested example there are individual players that only play for their respective teams. Contrast that with the Completely Crossed example, where every player ends up playing on each team. This would, obviously, be an incredibly weird scenario and not one we'd see in sport, but probably one you'd see in research where there is a crossover of treatments. Finally, the Partially Nested & Crossed example has a few players playing for multiple teams (for example, Player 1 played for both Team 1 and Team 2) and some players that stay with a single team during the season. This version is the one that we most frequently encounter in team sport.

Now that we have a general understanding of the difference between crossed and nested random effects, let's see how this looks when we build a model on our simulated data and (most importantly) let's see how specifying the model to have crossed or nested random effects changes the model output.

Model Examples

Crossed

Since we commonly deal with crossed (or partially nested/crossed) data in team sport, I’ll start with modeling the data to have crossed random effects. As a side note, most of the time when I look at people’s models in sport science they are creating the model structure like this, by default.

We will build a random intercept model (since we didn't simulate any fixed effects) and allow a separate intercept variance for both team and player_id, keeping in mind that this specifies that there are players who are potentially playing on different teams (crossed observations) across the season.

lme_crossed <- lmer(player_value ~ 1 + (1|team) + (1|player_id), data = dat)

Nested

Next, we fit the random intercept model but this time we treat our random effects of player_id and team as nested. That is to say that we are allowing the players to be nested within their respective teams. To review, this means that we are treating this data as if each player only played on a single team during the season and never changed teams.

lme_nested <- lmer(player_value ~ 1 + (1|team) + (1|team:player_id), data = dat)

Comparing the two models

First, we notice that the fixed effects are slightly different between the two models, but the difference is small in this simulated data.

The random effects look very different between the two model structures. In the crossed effects model we get two different random effects: one for the player_id and one for the team. However, in the nested effects model we have a random effect for team but then we have a random effect for player_id nested within each team. So, in the nested model a player that played for multiple teams will have a different random intercept for each of the teams they were nested in. Both models, of course, also have a residual in the random effects; however, the values are very different.

Let’s compile the random effects side-by-side to get a better appreciation for the differences. We will start with the `team` random effect since it is present in both models.
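A sketch of lining the team intercepts up side-by-side with ranef() from {lme4} (column names are illustrative):

## team random intercepts from each model
## assumes both models order the teams the same way (alphabetical factor levels)
data.frame(
  team = rownames(ranef(lme_crossed)$team),
  crossed = ranef(lme_crossed)$team[, "(Intercept)"],
  nested = ranef(lme_nested)$team[, "(Intercept)"]
)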

There are pretty stark differences between the random intercepts of the nested and crossed models! Recall from our EDA that the Bears had the highest average value of all the teams and they also had the fewest number of different players (5).

The player random effects get a little wonky because players who played for multiple teams will end up having different random effects depending on which team we nested them in. Conversely, in the crossed model we get one random effect estimate per player.

To join the random effects from both models we have to do a bit of work on the nested model to extract the info. (NOTE: Since the table is really long I'll only show the first 10 rows here, but you can go to the Github page to see the full output.)
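A sketch of that extraction (assuming {tidyverse} is loaded; the team:player_id rowname format is an assumption about how {lme4} labels the nested levels):

## crossed model: one intercept per player
crossed_players <- ranef(lme_crossed)$player_id %>%
  rownames_to_column(var = "player_id") %>%
  rename(crossed_intercept = `(Intercept)`)

## nested model: one intercept per team/player combination
nested_players <- ranef(lme_nested)$`team:player_id` %>%
  rownames_to_column(var = "team_player") %>%
  separate(team_player, into = c("team", "player_id"), sep = ":") %>%
  rename(nested_intercept = `(Intercept)`)

## join on player_id and show the first 10 rows
nested_players %>%
  left_join(crossed_players, by = "player_id") %>%
  head(10)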

Joining the two model random effects on player_id, we see that players have some very different random effects between the two models. In particular, players that played on different teams can differ greatly between the models.

For example, let’s look at Player 21.

In the crossed model we find that Player 21 has a random intercept that is 1.6 points below league average (the fixed effect intercept). However, in the nested model, when Player 21 was on the Bears he was 1.15 points above the population intercept and 0.74 points below the population intercept when he was on the Turtles. This is partially due to the team effect, where the Bears were the best team in the league (highest average value) and the Turtles were the worst team in the league (lowest average value).

Wrapping Up

Hopefully this tutorial provided a brief, but useful, overview of crossed versus nested random effects during mixed model construction. While most people seem to gravitate towards crossed effects by default, there may be times when nested effects are indeed required by the structure of the data. As the example shows, how we specify the model can dramatically change the outputs and thus the inference that we can derive from them.

To access the full code please see the Github page.

K-Nearest Neighbor: {tidymodels} tutorial

When working on data science or research teams, it often helps to have a workflow that makes it easy for teammates to review your work and add additional components to the data cleaning or model structure. Additionally, it ensures that the steps in the process are clear so that debugging is easy.

In R, the {tidymodels} package offers one such workflow and in python, folks seem to prefer scikit learn. This week, I’m going to walk through a full workflow in tidymodels using the K-nearest neighbor algorithm. Some of the things I’ll cover include:

  • Splitting the data in test/train sets and cross validation folds
  • Setting up the model structure (what tidymodels refers to as a recipe)
  • Creating preprocessing steps
  • Compiling the preprocessing and model structure together into a single workflow
  • Tuning the KNN model
  • Identifying the best tuned model
  • Evaluating the model on the test set
  • Saving the entire workflow and model to be used later on with new data
  • Making predictions with the saved workflow and model on new data

I've provided a number of tidymodels tutorials on this blog so feel free to search the blog for those. Additionally, all of my tidymodels tutorials and templates are available in a GitHub Repo. Alternatively, if Python is more your jam, I did a previous blog comparing the workflow in tidymodels to Scikit Learn, which can be found HERE.

This entire tutorial and data are available on my GitHub page if you’d like to code along.

Load & Clean Data

For this tutorial I’m going to use the 2021 FIFA Soccer Ratings, which are available on Kaggle. There are several years of ratings there, but we will concentrate on 2021 with the goal being to estimate a player’s contract value based on the various FIFA ratings provided and the position they play.

First, I load the tidyverse and tidymodels libraries and read in the data. I’m going to drop goalies so that we only focus on players who play the field and, since the data has a large number of columns, I’m going to get only the columns we care about: Information about the player (playerID, name, height, weight, club, league, position, and contract value) and then the various FIFA ratings.

The only column with missing data that impacts our analysis is the team position column, as this is a feature in the data set. Additionally, there are 207 players with a contract value of 0 (all 195 players missing a team position have a 0 contract value). So, perhaps these are players who weren’t on a club at the time the ratings were assigned.

I’m going to remove these players from our analysis data set, but I’m going to place them in their own data set because we will use them later to make estimates of their contract value.

Visualizing the Data

Let’s take a look at a visual of the contract value for all of the players in our data set. Since this is severely right skewed, we will plot it on the log scale.

We also notice there are a variety of positions (way more than I would have guessed in soccer!).

Finally, we want to explore how playing position might influence contract value.

tidymodels set up

Now that we have our data organized we want to set up a tidymodels workflow to estimate the contract value of every player using a K-nearest neighbor model.

First, we split the data into train and test sets and then further split the training data into 5 cross validation folds so that we can tune the model to find the best number of K-neighbors.
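A sketch of this step (the fifa object name for the cleaned data frame is an assumption, as are the other object names):

set.seed(1234)
fifa_split <- initial_split(fifa)
fifa_train <- training(fifa_split)
fifa_test <- testing(fifa_split)

set.seed(1234)
fifa_cv <- vfold_cv(fifa_train, v = 5)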

Next, we specify that we want to use a KNN model and set the mode to regression, since the outcome variable is continuous (as opposed to classification).
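A sketch of the model specification (the "kknn" engine is the standard KNN engine in {parsnip}, but treat the choice as an assumption about the original post):

knn_spec <- nearest_neighbor(neighbors = tune()) %>%
  set_engine("kknn") %>%
  set_mode("regression")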

Now that the model is specified we need to set up our preprocessing steps. This is one thing that tidymodels is super useful for. Normally, we’d need to do all this preprocessing before fitting the model and, as in the case of KNN and most machine learning models, we’d need to standardize the variables in the training set and then store the means and standard deviations of those variables so that we can use them to standardize the test set or future data. In tidymodels, we set up our preprocessing workflow and store it and it contains all that information for us! We can simply load it and use it for any downstream analysis we want to do. There are three preprocessing steps we want to add to our workflow:

  1. log transform the dependent variable (contract value)
  2. Dummy code the positions
  3. Standardize (z-score) all of the numeric variables in the model (the ratings)

We can view the preprocessing steps using the prep() function.
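A sketch of the recipe and prep() call (the column names value_eur and team_position are assumptions based on the Kaggle ratings data, and the recipe assumes identifier columns such as player name, club, and league have been dropped or handled separately):

fifa_rec <- recipe(value_eur ~ ., data = fifa_train) %>%
  step_log(value_eur) %>%
  step_dummy(team_position) %>%
  step_normalize(all_numeric_predictors())

fifa_rec %>%
  prep()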

 

With the model specified and the preprocessing recipe set up we are ready to compile everything into a single workflow.

We can look at the workflow to make sure everything makes sense before we start fitting the model.
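A sketch of the workflow:

knn_wf <- workflow() %>%
  add_recipe(fifa_rec) %>%
  add_model(knn_spec)

knn_wf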

Next, we tune the model using the cross validated folds that we set up on our training data set. We are trying to find the optimal number of k-neighbors that minimizes the RMSE of the outcome variable (the log of contract value).
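A sketch of the tuning call (the grid of candidate neighbor values is illustrative, not the original one):

knn_tune <- tune_grid(
  knn_wf,
  resamples = fifa_cv,
  grid = tibble(neighbors = seq(2, 20, by = 2))
)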

We see the results are stored in a list for every one of our 5 cross validation splits. We can view the model metrics for each of the folds.

The model with 10 neighbors appears to have the lowest RMSE. So, we want to pull that value directly from this table so that we can store it and use it later when we go to fit the final model.
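A sketch of pulling that value out (collect_metrics() and select_best() are the standard {tune} helpers):

## metrics for every candidate number of neighbors
collect_metrics(knn_tune)

## store the best number of neighbors for the final model
best_neighbors <- select_best(knn_tune, metric = "rmse") %>%
  pull(neighbors)

best_neighbors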

Fitting the Final Model Version 1: Using finalize_workflow()

Version 1 that I’ll show for fitting the final model uses the finalize_workflow() function. I like this approach if I’m building my model in a local session and then want to see how well it performs on the test session (which is what this function does). This isn’t the approach I take when I need to save everything for downstream use (which we will cover in Version 2).

First, fit the best model using the finalize_workflow() function.

Then, we get the model predictions for this final workflow on the test data set.

We can plot the predicted contract value compared to the actual contract value and calculate the test set RMSE.
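A sketch of Version 1, using last_fit() and collect_predictions() to fit on the training split and evaluate on the test split (the value_eur column name is the same assumption as above, and the outcome is on the log scale because of the recipe):

final_wf <- finalize_workflow(knn_wf, select_best(knn_tune, metric = "rmse"))

final_res <- last_fit(final_wf, split = fifa_split)

final_preds <- collect_predictions(final_res)

## predicted vs actual (log) contract value and the test set RMSE
final_preds %>%
  ggplot(aes(x = .pred, y = value_eur)) +
  geom_point() +
  geom_abline()

final_preds %>%
  rmse(truth = value_eur, estimate = .pred)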

Fitting the Final Model Version 2: Re-specify the model on the training set, save it, and use it later

In this second version of fitting the final model, we will take the best neighbors from the tuning phase, re-specify the model to the training set, reset the workflow, and then save these components so that we can use them later.

First, re-specify the model and notice I set the neighbors argument to best_neighbors, which we saved above when we tuned the model.

We then use this finalized/re-specified model to reset the workflow. Notice it is added to the add_model() function however the recipe, with our preprocessing steps, does not change.

With the workflow set up we now need to do the final two steps:

  1. Fit the model to the entire data set and extract the recipe using extract_recipe() since we will need to save this to preprocess any new data before making predictions.
  2. Fit the model to the final data and extract the model itself, using extract_fit_parsnip() so we can use the model to make future predictions.

Now we save the recipe and model as .rda files. We can then load these two structures and use them on new data!
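A sketch of Version 2, covering the steps just described (object and file names are illustrative):

## re-specify the model with the tuned number of neighbors
knn_final_spec <- nearest_neighbor(neighbors = best_neighbors) %>%
  set_engine("kknn") %>%
  set_mode("regression")

## reset the workflow -- the recipe stays the same
knn_final_wf <- workflow() %>%
  add_recipe(fifa_rec) %>%
  add_model(knn_final_spec)

## fit to the training data, then extract the prepped recipe and the fitted model
knn_final_fit <- knn_final_wf %>%
  fit(data = fifa_train)

fifa_recipe_saved <- extract_recipe(knn_final_fit)
fifa_knn_model <- extract_fit_parsnip(knn_final_fit)

## save both objects for later use on new data
save(fifa_recipe_saved, file = "fifa_recipe.rda")
save(fifa_knn_model, file = "fifa_knn_model.rda")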

We have 207 players with 0 contract value, of which 195 had no team position. So, that leaves us with 12 players that we can estimate contract value for with our saved model.

First, we use the bake() function to apply the saved recipe to the new data so that we can preprocess everything appropriately.

With the new data preprocessed we can now predict the log of contract value with our saved model.
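A sketch of applying the saved objects to the new players (new_players, holding the 12 players, is an assumed name):

load("fifa_recipe.rda")
load("fifa_knn_model.rda")

## preprocess the new data with the saved recipe
new_players_baked <- bake(fifa_recipe_saved, new_data = new_players)

## predict the log of contract value with the saved model
new_player_preds <- predict(fifa_knn_model, new_data = new_players_baked)
new_player_preds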


We can add these predictions back to the new data set, exponentiate the predicted contract value to get it back to the normal scale and then create a visual of our estimates for the 12 players.

Wrapping Up

That's a quick walkthrough of how to set up a complete tidymodels workflow. This approach would work for any type of model you want to build, not just KNN! As always, the code and data are available on my GitHub page.

Frequentist & Bayesian Approaches to Regression Models by Hand – Compare and Contrast

One of the ways I try and learn things is to code them from first principles. It helps me see what is going on under the hood and also allows me wrap my head around how things work. Building regression models in R is incredibly easy using the lm() function and Bayesian regression models can be conveniently built with the same syntax in packages like {rstanarm} and {brms}. However, today I’m going to do everything by hand to get a grasp of how the Bayesian regression model works, at a very basic level. I put together the code using the mathematical presentation of these concepts from William Bolstad’s book, Introduction to Bayesian Statistics.

The entire script is available on my GITHUB page.

Loading Packages & Getting Data

I’m going to use the data from the {palmerpenguins} package and concentrate on a simple linear regression which will estimate flipper length from bill length.

### packages ------------------------------------------------------
library(tidyverse)
library(palmerpenguins)
library(patchwork)

theme_set(theme_classic())

### data ----------------------------------------------------------
data("penguins")
dat <- penguins %>%
  select(bill_length = bill_length_mm, flipper_length = flipper_length_mm) %>%
  na.omit()

head(dat)

 

EDA

These two variables share a relatively large correlation with each other.
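For example:

## correlation between bill length and flipper length
with(dat, cor(bill_length, flipper_length))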

Ordinary Least Squares Regression

First, I’ll start by building a regression model under the Frequentist paradigm, specifying no prior values on the parameters (even though in reality the OLS model is using a flat prior, where all values are equally plausible — a discussion for a different time), using the lm() function.

### Ordinary Least Squares Regression ------------------------------
fit_ols <- lm(flipper_length ~ I(bill_length - mean(dat$bill_length)), data = dat)
summary(fit_ols)

Technical Note: Since we don't observe a bill length of 0 mm in penguins, I chose to transform the bill length data by grand mean centering it, subtracting the population mean from each bill length. This won't change the predictions from the model, but it does change our interpretation of the coefficients themselves (and the intercept is now directly interpretable, which it wouldn't be if we left the bill length data in its untransformed state).

OLS Regression by Hand

Okay, that was easy. In a single line we constructed a model and we are up and running. But, let’s calculate this by hand so we can see where the coefficients and their corresponding standard errors came from.

# Calculate the least squares regression line by hand now
dat_stats <- dat %>%
  summarize(x_bar = mean(bill_length),
            x_sd = sd(bill_length),
            y_bar = mean(flipper_length),
            y_sd = sd(flipper_length),
            r = cor(bill_length, flipper_length),
            x2 = x_bar^2,
            y2 = y_bar^2,
            xy_bar = x_bar * y_bar,
            .groups = "drop")

dat_stats

In the above code, I'm calculating summary statistics (mean and SD) for both the independent and dependent variables, extracting their correlation coefficient, and getting their squared means and the product of their means to be used in downstream calculations of the regression coefficients. Here is what the output looks like:

The model intercept is simply the mean of our outcome variable (flipper length).

intercept <- dat_stats$y_bar

The coefficient for our independent variable is calculated as the correlation coefficient multiplied by the SD of the y variable divided by the SD of the x variable.

beta <- with(dat_stats,
             r * (y_sd / x_sd))

beta

Finally, I’ll store the grand mean of bill length so that we can use it when we need to center bill length and make predictions.

x_bar <- dat_stats$x_bar
x_bar

Now that we have the intercept and beta coefficient for bill length calculated by hand we can construct the regression equation:
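In generic form, that equation is: predicted flipper_length = intercept + beta * (bill_length - x_bar), with the intercept and beta values calculated above plugged in.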

Notice that these are the exact values we obtained from our model using the lm() function.

We can use the two models to make predictions and show that they are the exact same.

## Make predictions with the two models
dat %>%
  mutate(pred_model = predict(fit_ols),
         pred_hand = intercept + beta * (bill_length - x_bar))

In the model estimated from the lm() function we also get a sigma parameter (residual standard error, RSE), which tells us, on average, how far off the model predictions are. We can build this by hand by first calculating the squared error between the observed values and our predictions and then calculating the RSE as the square root of the sum of squared residuals divided by the model degrees of freedom.

# Calculate the estimated variance around the line
N_obs <- nrow(dat)

dat %>%
  mutate(pred = intercept + beta * (bill_length - x_bar),
         resid = (flipper_length - pred),
         resid2 = resid^2) %>%
  summarize(n_model_params = 2,
            deg_freedom = N_obs - n_model_params,
            model_var = sum(resid2) / deg_freedom,
            model_sd = sqrt(model_var))

Again, our by-hand calculation matches exactly what we obtained from the lm() function, 10.6.

Bayesian Linear Regression by Hand

Now let’s turn our attention to building a Bayesian regression model.

First we need to calculate the sum of squared error for X, which we do with this equation:

ss_x = N * (mean(x^2) - mean(x)^2)

N <- nrow(dat)
mean_x2 <- mean(dat$bill_length^2)
mu_x2 <- mean(dat$bill_length)^2

N
mean_x2
mu_x2

ss_x <- N * (mean_x2 - mu_x2)
ss_x

Next, we need to specify some priors on the model parameters. The priors can come from a number of sources. We could set them subjectively (usually people hate this idea). Or, we could look in the scientific literature and use prior research as a guide for plausible values of the relationship between flipper length and bill length. In this case, I'm just going to specify some priors that are within the range of the data but have enough variance around them to "let the data speak". (NOTE: you could rerun the entire analysis with different priors and smaller or larger variances to see how the model changes. I hope to do a longer blog post about priors in the future).

  • For the slope coefficient we decide to have a normal prior, N(1, 1^2)
  • For the intercept coefficient we choose a normal prior, N(180, 10^2)
  • We don't know the true variance so we use the estimated variance from the least squares regression line

prior_model_var <- sigma(fit_ols)^2

prior_slope_mu <- 1
prior_slope_var <- 1^2

prior_intercept_mu <- 180
prior_intercept_var <- 10^2

Next, we calculate the posterior precision (1/variance) for the slope coefficient. We do it with this equation:

1/prior_slope_var + (ss_x / prior_model_var)

And then convert it to a standard deviation (which is more useful to us and easier to interpret than precision, since it is on the scale of the data).

# 1/prior_slope_var + (ss_x / prior_model_var)
posterior_slope_precision <- 1 / prior_slope_var + ss_x / prior_model_var

# Convert to SD
posterior_slope_sd <- posterior_slope_precision^-(1/2)

Once we have the precision for the posterior slope calculated we can calculate the posterior regression coefficient for the slope using this equation:

(1/prior_slope_var) / posterior_slope_precision * prior_slope_mu + (ss_x / prior_model_var) / posterior_slope_precision * beta

# posterior slope
posterior_slope_mu <- (1/prior_slope_var) / posterior_slope_precision * prior_slope_mu + (ss_x / prior_model_var) / posterior_slope_precision * beta
posterior_slope_mu

We can plot the prior and posterior slope values, by first simulating the posterior slope using its mu and SD values.

## Plot prior and posterior for the slope
set.seed(4)
prior_sim <- rnorm(n = 1e4, mean = prior_slope_mu, sd = sqrt(prior_slope_var))
posterior_sim <- rnorm(n = 1e4, mean = posterior_slope_mu, sd = posterior_slope_sd)

plot(density(posterior_sim),
     col = "blue",
     lwd = 4,
     xlim = c(-2, 3),
     main = "Prior & Posterior\nfor\nBayesian Regression Slope Coefficient")
lines(density(prior_sim),
      col = "red",
      lty = 2,
      lwd = 4)
legend("topleft",
       legend = c("Prior", "Posterior"),
       col = c("red", "blue"),
       lty = c(2, 1),
       lwd = c(2,2))

We notice that the prior for the slope, N(1, 1^2), had a rather broad range of plausible values. However, after observing some data and combining that observed data with our prior, we find the posterior to have settled into a reasonable range given the observed values. Again, if you had selected other priors, or perhaps a prior with a much narrower variance, these results would be different and potentially more influenced by your prior.

Now that we have a slope parameter we need to calculate the intercept. The equations are commented out in the code below.

## Posterior precision for the intercept
# 1/prior_intercept_var + N/prior_model_var

posterior_intercept_precision <- 1/prior_intercept_var + N/prior_model_var
posterior_intercept_precision

# Convert to SD
posterior_intercept_sd <- posterior_intercept_precision^-(1/2)
posterior_intercept_sd

## Posterior intercept mean
# (1/prior_intercept_var) / posterior_intercept_precision * prior_intercept_mu + (N/prior_model_var) / posterior_intercept_precision * intercept
posterior_intercept_mu <- (1/prior_intercept_var) / posterior_intercept_precision * prior_intercept_mu + (N/prior_model_var) / posterior_intercept_precision * intercept
posterior_intercept_mu

Comparing our Bayesian regression coefficients and standard errors to the ones we obtained from the lm() function, we find that they are nearly identical.

Using the coefficients and their corresponding standard errors we can also compare the 95% Credible Interval from the Bayesian model to the 95% Confidence Interval from the OLS model (again, nearly identical).
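A sketch of that comparison (the t-critical value mirrors the approach used for the prediction intervals below; confint() gives the OLS intervals):

## 95% credible intervals from the posterior means and SDs
posterior_intercept_mu + c(-1, 1) * qt(p = 0.975, df = N - 2) * posterior_intercept_sd
posterior_slope_mu + c(-1, 1) * qt(p = 0.975, df = N - 2) * posterior_slope_sd

## 95% confidence intervals from the OLS model
confint(fit_ols, level = 0.95)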

And writing out the Bayesian model we see that it is the same as the Frequentist model.

Using the posterior mean and SDs for the slope and intercept we can simulate a distribution and plot and summarize them using quantile intervals and credible intervals, to get a picture of our uncertainty.

## Use simulation for the slope and intercept
set.seed(413)
posterior_intercept_sim <- rnorm(n = 1e4, mean = posterior_intercept_mu, sd = posterior_intercept_sd)
posterior_slope_sim <- rnorm(n = 1e4, mean = posterior_slope_mu, sd = posterior_slope_sd)

par(mfrow = c(1, 2))
hist(posterior_intercept_sim)
hist(posterior_slope_sim)

Making Predictions

Now that we have our models specified it’s time to make some predictions!

## Add all predictions to the original data set
dat_final <- dat %>%
  mutate(pred_ols = predict(fit_ols),
         pred_ols_by_hand = intercept + beta * (bill_length - x_bar),
         pred_bayes_by_hand = posterior_intercept_mu + posterior_slope_mu * (bill_length - x_bar))

head(dat_final, 10)

Because the equations were pretty much identical we can see that all three models (OLS with lm(), OLS by hand, and Bayes by hand) produce the same predicted values for flipper length.

We can also visualize these predictions along with the prediction intervals.

## Predictions with uncertainty from the OLS model
ols_preds <- dat_final %>%
  bind_cols(predict(fit_ols, newdata = ., interval = "prediction")) %>%
  ggplot(aes(x = fit, y = flipper_length)) +
  geom_point(size = 2) +
  geom_ribbon(aes(ymin = lwr, ymax = upr),
              alpha = 0.4) +
  geom_smooth(method = "lm",
              se = FALSE) +
  ggtitle("OLS Predictions with 95% Prediction Intervals")

## Now predict with the Bayesian Model
# The prediction error variance is calculated as:
# error_var = prior_model_var + posterior_intercept_sd^2 + posterior_slope_sd^2 * (x - x_bar)^2
bayes_preds <- dat %>%
  mutate(fit = posterior_intercept_mu + posterior_slope_mu * (bill_length - x_bar),
         error_var = prior_model_var + posterior_intercept_sd^2 + posterior_slope_sd^2 * (bill_length - x_bar)^2,
         rse = sqrt(error_var),
         lwr = fit - qt(p = 0.975, df = N - 2) * rse,
         upr = fit + qt(p = 0.975, df = N - 2) * rse) %>%
  ggplot(aes(x = fit, y = flipper_length)) +
  geom_point(size = 2) +
  geom_ribbon(aes(ymin = lwr, ymax = upr),
              alpha = 0.4) +
  geom_smooth(method = "lm",
              se = FALSE) +
  ggtitle("Bayesian Predictions with 95% Prediction Intervals")


ols_preds | bayes_preds

Okay, I know what you are thinking. The equations were the same, the predictions are the same, and the prediction intervals are the same….What’s the big deal?!

The big deal is that the Bayesian model gives us more flexibility. Not only can we specify priors, allowing us to have a higher weighting on more plausible values (based on prior data, prior research, domain expertise, etc.); but, we can also produce an entire posterior distribution for the predictions.

Let’s take a single row of observation from the data and make a prediction on it, complete with 95% credible intervals, using our Bayesian model.

single_obs <- dat %>% slice(36)
single_obs

pred_bayes <- single_obs %>%
  mutate(fit = posterior_intercept_mu + posterior_slope_mu * (bill_length-x_bar),
         error_var = prior_model_var + posterior_intercept_sd^2 + posterior_slope_sd^2 * (bill_length - x_bar)^2,
         rse = sqrt(error_var),
         lwr = fit - qt(p = 0.975, df = N - 2) * rse,
         upr = fit + qt(p = 0.975, df = N - 2) * rse) 

Next, we simulate 10,000 observations using the mean prediction (fit) and the rse from the table above and summarize the results.

set.seed(582)
pred_bayes_sim <- rnorm(n = 1e4, mean = pred_bayes$fit, sd = pred_bayes$rse)

mean(pred_bayes_sim)
quantile(pred_bayes_sim, probs = c(0.5, 0.025, 0.975))
mean(pred_bayes_sim) + qt(p = c(0.025, 0.975), df = N - 2) * sd(pred_bayes_sim)

Our mean value, the 95% quantile intervals, and the 95% credible intervals are similar to the table above (they should be, since we simulated with those parameters). We will also get similar values if we use our OLS model and make a prediction on this observation with prediction intervals.

But, because we simulated a full distribution, from the Bayesian prediction we also get 10,000 plausible values for the estimate of flipper length based on the observed bill length.

This offers us flexibility to make more direct statements about the predictive distribution. For example, we might ask, “What is the probability that the predicted flipper length is greater than 210 mm?” and we could directly compute this from the simulated data! This provides us with a number of options if we were using this model to make decisions on and perhaps have threshold values or ranges of critical importance where we need to know how much of the density of our predicted distribution is below or above the threshold or falls outside of the region of importance.
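For example, using the simulation from above:

## probability the predicted flipper length is greater than 210 mm
mean(pred_bayes_sim > 210)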

Wrapping Up

That’s a little bit on working with regression models by hand, from first principles. The Bayesian approach affords us a lot of control and flexibility over the model, via priors (more on that in a future blog post) and the predictions (via simulation). The Bayesian regression model calculated here was a simple approach and did not require Markov Chain Monte Carlo (MCMC), which would be run when using {rstanarm} and {brms}. If you are curious how this works, I wrote a previous blog on using a GIBBS Sampler to estimate the posterior distributions for a regression model, CLICK HERE. But, even with this simple approach we are afforded the ability to specify priors on the model parameters of the deterministic/fixed component of the model. We did not specify a prior on the stochastic/error part of the model (we used the sigma value calculated from the OLS model — which we also calculated by hand). This was necessary in this simple conjugate approach because the normal distribution is a two parameter distribution so we needed to treat one of the parameters as fixed (in this case the SD). Had we used an MCMC approach, we could have further specified a prior distribution on the model error.

If you notice any errors, feel free to drop me an email.

The code is available on my GITHUB page.

Calculating Confidence Intervals in R

A favorite paper of mine is the 1986 paper by Gardner and Altman regarding confidence intervals and estimation as a more useful way of reporting data than a dichotomous p-value:

Gardner, MJ. Altman, DG. (1986). Confidence intervals rather than P values: Estimation rather than hypothesis testing. Brit Med J; 292:746-750.

In this paper, Gardner and Altman discuss three main points for either moving away from or supplementing statistical reporting with p-values:

  1. Research often focuses on null hypothesis significance testing with the goal being to identify statistically significant results.
  2. However, we are often interested in the magnitude of the factor of interest.
  3. Given that research deals with samples of a broader population, the readers are not only interested in the observed magnitude of the estimand but also the degree of variability and plausible range of values for the population. This variability can be quantified using confidence intervals.

Aside from the paper providing a clear explanation of the issue at hand, their appendix offers the equations to calculate confidence intervals for means, differences in means, proportions, and differences in proportions. Thus, I decided to compile the appendix in an R script for those looking to code confidence intervals (and not have to rely on pre-built functions).

All of this code is available on my GITHUB page.

Confidence Intervals for Means and Differences

Single Sample

  1. Obtain the mean
  2. Calculate the standard error (SE) of the mean as SE = SD/sqrt(N)
  3. Multiply by a t-critical value specific to the level of confidence of interest and the degrees of freedom (DF) for the single sample, DF = N – 1

The confidence intervals are calculated as:

Low = mean – t_{crit} * SE
High = mean + t_{crit} * SE

Example

We collect data on 30 participants on a special test of strength and observe a mean of 40 and standard deviation of 10. We want to calculate the 90% confidence interval.

The steps above can easily be computed in R. First, we write down the known info (the data that is provided to us). We then calculate the standard error and degrees of freedom (N - 1). To obtain the critical value for our desired level of confidence, we use the t-distribution specific to the degrees of freedom of our data.

## Known info
N <- 30
avg <- 40
SD <- 10

## Calculate DF and SE
SE <- SD / sqrt(N)
DF <- N - 1

## Calculate the confidence level of interest (90%), the amount of data in each of the two tails ((1 - level of interest) / 2) and the t-critical value
level_of_interest <- 0.90
tail_value <- (1 - level_of_interest)/2

t_crit <- abs(qt(tail_value, df = DF))
t_crit

## Calculate the 90% CI
low90 <- round(avg - t_crit * SE, 1)
high90 <- round(avg + t_crit * SE, 1)

cat("The 90% Confidence Interval is:", low90, " to ", high90)

Two Samples

  1. Obtain the sample mean and standard deviations for the two samples
  2. Pool the estimate of the standard deviation: s = sqrt(((n_1 - 1)*s_1^2 + (n_2 - 1)*s_2^2) / (n_1 + n_2 - 2))
  3. Calculate the SE for the difference: SE_{diff} = s * sqrt(1/n_1 + 1/n_2)
  4. Calculate the confidence interval as:

Low = (x_1 – x_2) – t_{crit} * SE_{diff}
High = (x_1 – x_2) + t_{crit} * SE_{diff}

Example

The example in the paper provides the following info:

  • Blood pressure levels were measured in 100 diabetic and 100 non-diabetic men aged 40-49 years old.
  • Mean systolic blood pressure was 146.4 mmHg (SD = 18.5) in diabetics and 140.4 mmHg (SD = 16.8) in non-diabetics.

Calculate the 95% CI.

## store the known data
N_diabetic <- 100
N_non_diabetic <- 100
diabetic_avg <- 146.4
diabetic_sd <- 18.5
non_diabetic_avg <- 140.4
non_diabetic_sd <- 16.8

## calculate the difference in means, the pooled SD, and the SE of diff
group_diff <- diabetic_avg - non_diabetic_avg

pooled_sd <- sqrt(((N_diabetic - 1)*diabetic_sd^2 + (N_non_diabetic - 1)*non_diabetic_sd^2) / (N_diabetic + N_non_diabetic - 2))

se_diff <- pooled_sd * sqrt(1/N_diabetic + 1/N_non_diabetic)

## Calculate the confidence level of interest (95%), the amount of data in each of the two tails ((1 - level of interest) / 2) and the t-critical value
level_of_interest <- 0.95
tail_value <- (1 - level_of_interest)/2

t_crit <- abs(qt(tail_value, df = N_diabetic + N_non_diabetic - 2))
t_crit

## Calculate the 95% CI
low95 <- round(group_diff - t_crit * se_diff, 1)
high95 <- round(group_diff + t_crit * se_diff, 1)

cat("The 95% Confidence Interval is:", low95, " to ", high95)

Confidence Intervals for Proportions

Single Sample

  1. Obtain the observed proportion from the sample
  2. Calculate the SE of the proportion, SE = sqrt((p * (1-p)) / N)
  3. Obtain the z-critical value from a standard normal distribution for the level of confidence of interest (since the value for a proportion does not depend on sample size as it does for means).
  4. Calculate the confidence interval:

low = p – z_{crit} * SE
high = p + z_{crit} * SE

Example

We observe a basketball player with 80 field goal attempts and a FG% of 39%. Calculate the 95% CI.

## Store the known info
N <- 80
fg_pct <- 0.39

## Calculate SE
se <- sqrt((fg_pct * (1 - fg_pct)) / N)

## Calculate z-critical value for 95% confidence
level_of_interest <- 0.95
tail_value <- (1 - level_of_interest) / 2
z_crit <- qnorm(p = tail_value, lower.tail = FALSE)

## Calculate the 95% CI
low95 <- round(fg_pct - z_crit * se, 3)
high95 <- round(fg_pct + z_crit * se, 3)

cat("The 95% Confidence Interval is:", low95, " to ", high95)

Two Samples

  1. Calculate the difference in proportions between the two groups
  2. Calculate the SE of the difference in proportions: SE_{diff} = sqrt(((p_1 * (1-p_1)) / n_1) + ((p_2 * (1 - p_2)) / n_2))
  3. Calculate the z-critical value for the level of interest
  4. Calculate the confidence interval as:

low = (p_1 – p_2) – (z_{crit} * se_{diff})
high = (p_1 – p_2) + (z_{crit} * se_{diff})

Example of two unpaired samples

The study provides the following table of example data:

data.frame(
  response = c("improvement", "no improvement", "total"),
  treatment_A = c(61, 19, 80),
  treatment_B = c(45, 35, 80)
)

The difference we are interested in is between the proportion who improved in treatment A and the proportion of those who improved in treatment B.

## Obtain the two proportions from the table and total sample sizes
pr_A <- 61/80
n_A <- 80
pr_B <- 45/80
n_B <- 80

## calculate the difference in proportions
diff_pr <- pr_A - pr_B

## Calculate the SE
se_diff <- sqrt((pr_A * (1 - pr_A))/n_A + (pr_B * (1 - pr_B))/n_B)

## Get z-critical value for 95% confidence
level_of_interest <- 0.95
tail_value <- (1 - level_of_interest) / 2
z_crit <- qnorm(p = tail_value, lower.tail = FALSE)

## Calculate the 95% CI
low95 <- round(diff_pr - z_crit * se_diff, 3)
high95 <- round(diff_pr + z_crit * se_diff, 3)

cat("The 95% Confidence Interval is:", low95, " to ", high95)

 

Example for two paired samples

We can organize the data in a table like this:

data.frame(
  test_1 = c("Present", "Present", "Absent", "Absent"),
  test_2 = c("Present", "Absent", "Present", "Absent"),
  number_of_subjects = c("a", "b", "c", "d")
)

Let’s say we measured a group of subjects for a specific disease twice in a study. A subject either has the disease (present) or does not (absent) in the two time points. We observe the following data:

dat <- data.frame(
  test_1 = c("Present", "Present", "Absent", "Absent"),
  test_2 = c("Present", "Absent", "Present", "Absent"),
  number_of_subjects = c(10, 25, 45, 5)
)

dat

## total sample size
N <- sum(dat$number_of_subjects)

If we want to compare the proportion with the disease present at Test 1 to the proportion with the disease present at Test 2, we calculate the two proportions as:

p_1 = (a + b) / N
p_2 = (a + c) / N
Diff = p_1 – p_2

The SE of the difference is:

SE_{diff} = 1/N * sqrt(b + c – (b-c)^2/N)

## Obtain the info we need from the table
p1 <- (10 + 25) / N
p2 <- (10 + 45) / N
diff_prop <- p1 - p2
diff_prop

## Calculate the SE of the difference
se_diff <- 1 / N * sqrt(25 + 45 - (25 - 45)^2/N)

## Get z-critical value for 95% confidence
level_of_interest <- 0.95
tail_value <- (1 - level_of_interest) / 2
z_crit <- qnorm(p = tail_value, lower.tail = FALSE)

## Calculate the 95% CI
low95 <- round(diff_prop - z_crit * se_diff, 3)
high95 <- round(diff_prop + z_crit * se_diff, 3)

cat("The 95% Confidence Interval is:", low95, " to ", high95)

As always, all of this code is available on my GITHUB page.

And, if you’d like to read the full paper, you can find it here:

Gardner, MJ. Altman, DG. (1986). Confidence intervals rather than P values: Estimation rather than hypothesis testing. Brit Med J; 292:746-750.