Category Archives: R Tips & Tricks

TidyX Episode 117: Creating Unique Participant IDs & Observation Groups in tidyverse

This week, Ellis Hughes and I go back to our {tidyverse} roots to kick off a series of episodes covering a number of the data engineering and data cleaning tasks we commonly perform.

In this episode, we show how to set up unique participant IDs using {tidyverse} and then use integer division to produce groupings of observations by participant so that the data can be aggregated for research purposes.
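
The episode’s full code is linked below, but here is a minimal sketch of the two ideas; the data, names, and group size are made up for illustration:

### Sketch: unique participant IDs + observation groups -----------------------
library(tidyverse)

set.seed(2022)
d <- tibble(
  participant = rep(c("Smith", "Jones", "Lee"), each = 6),
  value = rnorm(18)
)

d %>%
  group_by(participant) %>%
  mutate(
    participant_id = cur_group_id(),          # unique integer ID per participant
    obs_group = (row_number() - 1) %/% 3 + 1  # integer division: blocks of 3 observations
  ) %>%
  ungroup()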

If you have a data engineering or data cleaning issue you are currently working through in {tidyverse} and want some help, feel free to comment on the YouTube channel and we can see about working on it for an upcoming episode.

To watch our screencast, CLICK HERE.

To access our code, CLICK HERE.

R Tips & Tricks: Write Data to Separate Excel Sheet Tabs

When working with colleagues and managers, sometimes they prefer to have data or analysis written out to an Excel workbook so that they can look at the information and sort or filter it however they see fit. Rather than saving each output to its own CSV file and then spending time copying and pasting them all into a single Excel workbook (different data outputs on different tabs), let’s see how we can write everything out to a multi-tab Excel workbook from R in one shot using the {openxlsx} package.

Data

We will simulate some sports data. In this example, we have two data sources:

  1. Data about teams
  2. Data about players

### Libraries -----------------------
library(tidyverse)
library(openxlsx)


### Create Data -----------------------

teams <- data.frame(
  team = c("Bats", "Sharks", "Tigers", "Bisons"),
  stat1 = round(rnorm(n = 4, mean = 100, sd = 20), 1),
  stat2 = round(rnorm(n = 4, mean = 250, sd = 70), 1),
  stat3 = round(rnorm(n = 4, mean = 0, sd = 3), 2)
)

players <- data.frame(
  player = c("Tom", "Joe", "Karl", "Bob", "Ben", "Albert", "Simon", "Harold", "Ken", "Cal"),
  cool_stat1 = round(rnorm(n = 10, mean = 300, sd = 10), 1),
  cool_stat2 = round(rnorm(n = 10, mean = 20, sd = 5), 1),
  cool_stat3 = round(rnorm(n = 10, mean = 500, sd = 50), 1),
  cool_stat4 = round(rnorm(n = 10, mean = 10, sd = 15), 1)
)

Store the data frames in a named list

Next, we need to store each of the data frames we want in our Excel output in a named list. Naming the elements of the list is important, as these names will become the names of the Excel tabs.

## Store each data frame as a named list -----------------------
list_of_dataframes <- list(teams = teams,
                           players = players)

Write the Data to an Excel Workbook

Once we have the data sets we are interested in structured in a named list, we are ready to use the {openxlsx} package to write the data to a multi-tab Excel workbook.

First, we create an empty Excel workbook object to store our results.

## Create an empty excel workbook object -----------------------
blank_excel <- createWorkbook()

Next, we loop over our list of data sets and store each data set on its own named tab.

## Loop over the list, storing each data set on its own tab in the blank Excel workbook -----------
Map(function(df, tab_name){
  addWorksheet(blank_excel, tab_name)
  writeData(blank_excel, tab_name, df)
},
list_of_dataframes, names(list_of_dataframes))

If you’ve done it correctly, you should see output in your R console indicating each of the data sets that was placed into the workbook.
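
As an aside, if you’d rather the loop run silently, purrr’s walk2() (loaded with the tidyverse) performs the same iteration but returns its input invisibly. A quick sketch, assuming a freshly created workbook (addWorksheet() will error if a tab name already exists):

## Same loop with walk2(), which suppresses the printed output -----------
walk2(list_of_dataframes, names(list_of_dataframes), function(df, tab_name){
  addWorksheet(blank_excel, tab_name)
  writeData(blank_excel, tab_name, df)
})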

Finally, we want to save the results to an Excel workbook that we can email to our co-workers. We use the saveWorkbook() function to save the workbook to our working directory.

## Save the now populated excel workbook --------------------------------------------
saveWorkbook(blank_excel, file = "league_data.xlsx", overwrite = TRUE)

When we open the Excel workbook, we see that we have both tabs, properly named for the data they contain.

Shiny – User Defined Chart Parameters

A colleague was working on a web app for his basketball team and asked me if there was a way to create a {shiny} web app that allowed the user to define which parameters they would like to see on the plot. I figured this would be something others might be interested in as well, so here we go!

Load Packages, Helper Functions & Data

I’ll use data from the Lahman baseball database (seasons 2017 – 2019). I’m also going to create two helper functions: one for calculating the z-scores for our stats of interest and one for converting those z-scores to t-scores. The t-score puts the z-score on a 0 to 100 scale for plotting purposes in our polar plot. Additionally, we will use these standardized scores to conditionally format colors in our {gt} table (but we will hide the standardized columns so that the user only sees the raw data and the colors). Finally, I’m going to create both a wide and a long format of the data, as it will be easier to use one or the other depending on the type of plot or table I am building.

#### Load packages ------------------------------------------------
library(tidyverse)
library(shiny)
library(Lahman)
library(gt)
library(plotly)

theme_set(theme_minimal() + 
            theme(
              axis.text = element_text(face = "bold", size = 12),
              legend.title = element_blank(),
              legend.position = "none"
            ) )

#### helper functions -------------------------------------------

z_score <- function(x){
  z = (x - mean(x, na.rm = T)) / sd(x, na.rm = T)
  return(z)
}

t_score <- function(x){
  t = (x * 10) + 50
  t = ifelse(t > 100, 100,
             ifelse(t < 0, 0, t))
  return(t)
}


#### Get Data ---------------------------------------------------

dat <- Batting %>%
  filter(between(yearID, left = 2017, right = 2019),
         AB >= 200) %>% 
  group_by(yearID, playerID) %>%
  summarize(across(.cols = G:GIDP,
                   ~sum(.x)),
            .groups = "drop") %>%
  mutate(ba = H / AB,
         obp = (H + BB + HBP) / (AB + HBP + SF),
         slg = ((H - X2B - X3B - HR) + X2B*2 + X3B*3 + HR*4) / AB,
         ops = obp + slg,
         hr_rate = HR / AB) %>%
  select(playerID, yearID, AB, ba:hr_rate) %>%
  mutate(across(.cols = ba:hr_rate,
                list(z = z_score)),
         across(.cols = ba_z:hr_rate_z,
                list(t = t_score))) %>%
  left_join(People %>%
              mutate(name = paste(nameLast, nameFirst, sep = ", ")) %>%
              select(playerID, name)) %>%
  relocate(name, .before = yearID)


dat_long <- Batting %>%
  filter(between(yearID, left = 2017, right = 2019),
         AB >= 200) %>% 
  group_by(playerID) %>%
  summarize(across(.cols = G:GIDP,
                   ~sum(.x)),
            .groups = "drop") %>%
  mutate(ba = H / AB,
         obp = (H + BB + HBP) / (AB + HBP + SF),
         slg = ((H - X2B - X3B - HR) + X2B*2 + X3B*3 + HR*4) / AB,
         ops = obp + slg,
         hr_rate = HR / AB) %>%
  select(playerID, AB, ba:hr_rate) %>%
  mutate(across(.cols = ba:hr_rate,
                list(z = z_score)),
         across(.cols = ba_z:hr_rate_z,
                list(t = t_score))) %>%
  left_join(People %>%
              mutate(name = paste(nameLast, nameFirst, sep = ", ")) %>%
              select(playerID, name)) %>%
  relocate(name, .before = AB) %>%
  select(playerID:AB, ends_with("z_t")) %>%
  pivot_longer(cols = -c(playerID, name, AB),
               names_to = "stat") %>%
  mutate(stat = case_when(stat == "ba_z_t" ~ "BA",
                          stat == "obp_z_t" ~ "OBP",
                          stat == "slg_z_t" ~ "SLG",
                          stat == "ops_z_t" ~ "OPS",
                          stat == "hr_rate_z_t" ~ "HR Rate"))


dat %>%
  head()

dat_long %>%
  head()

The Figures for our App

Before I build the {shiny} app, I wanted to first construct the three figures I will include. The code for these is available on GitHub, but here is what they look like:

  • For the polar plot, I will allow the user to define which variables they want on the chart.
  • For the time series plot, I am going to create an interactive {plotly} chart that allows the user to select the stat they want to see and then hover over the player’s points to obtain information such as the raw value and the number of at bats in the given season via a simple tooltip.
  • The table, as discussed above, will use conditional formatting to provide the user with extra context about how the player performed relative to his peers in a given season.

Because I don’t like to clutter up my {shiny} apps, I tend to build my plots and tables into custom functions. That way, all I need to do is set up a reactive() in the server to obtain the user-selected data and then call the function on that data. Here are the functions for the three figures above.

## table function
tbl_func <- function(NAME){
  dat %>%
  filter(name == NAME) %>%
  select(yearID, AB:hr_rate, ends_with("z_t")) %>%
  gt(rowname_col = "yearID") %>%
  fmt_number(columns = ba:hr_rate,
             decimals = 3) %>%
  cols_label(
    AB = md("**AB**"),
    ba = md("**Batting Avg**"),
    obp = md("**OBP**"),
    slg = md("**SLG**"),
    ops = md("**OPS**"),
    hr_rate = md("**Home Run Rate**")
  ) %>%
  tab_header(title = NAME) %>%
  opt_align_table_header(align = "left") %>%
  tab_options(column_labels.border.top.color = "transparent",
              column_labels.border.top.width = px(3),
              table.border.top.color = "transparent",
              table.border.bottom.color = "transparent") %>%
  cols_align(align = "center") %>%
  cols_hide(columns = ends_with("z_t")) %>%
    tab_style(
      style = cell_fill(color = "palegreen"),
      location = cells_body(
        columns = ba,
        rows = ba_z_t > 60
      )
    )  %>%
    tab_style(
      style = cell_fill(color = "red"),
      location = cells_body(
        columns = ba,
        rows = ba_z_t < 40
      )
    ) %>%
    tab_style(
      style = cell_fill(color = "palegreen"),
      location = cells_body(
        columns = obp,
        rows = obp_z_t > 60
      )
    )  %>%
    tab_style(
      style = cell_fill(color = "red"),
      location = cells_body(
        columns = obp,
        rows = obp_z_t < 40
      )
    ) %>%
    tab_style(
      style = cell_fill(color = "palegreen"),
      location = cells_body(
        columns = slg,
        rows = slg_z_t > 60
      )
    )  %>%
    tab_style(
      style = cell_fill(color = "red"),
      location = cells_body(
        columns = slg,
        rows = slg_z_t < 40
      )
    ) %>%
    tab_style(
      style = cell_fill(color = "palegreen"),
      location = cells_body(
        columns = ops,
        rows = ops_z_t > 60
      )
    )  %>%
    tab_style(
      style = cell_fill(color = "red"),
      location = cells_body(
        columns = ops,
        rows = ops_z_t < 40
      )
    ) %>%
    tab_style(
      style = cell_fill(color = "palegreen"),
      location = cells_body(
        columns = hr_rate,
        rows = hr_rate_z_t > 60
      )
    )  %>%
    tab_style(
      style = cell_fill(color = "red"),
      location = cells_body(
        columns = hr_rate,
        rows = hr_rate_z_t < 40
      )
    ) 
}


## Polar plot function
polar_plt <- function(NAME, STATS){
  dat_long %>%
    filter(name == NAME,
           stat %in% STATS) %>%
    ggplot(aes(x = stat, y = value, fill = stat)) +
    geom_col(color = "white", width = 0.75) +
    coord_polar(theta = "x") +
    geom_hline(yintercept = 50, size = 1.2) +
    labs(x = "", y = "") +
    ylim(0, 100)
  
} 


## time series plot function
time_plt <- function(NAME, STAT){
  
  STAT <- case_when(STAT == "BA" ~ "ba",
                    STAT == "OBP" ~ "obp",
                    STAT == "SLG" ~ "slg",
                    STAT == "OPS" ~ "ops",
                    STAT == "HR Rate" ~ "hr_rate")
  
  stat_z <- paste0(STAT, "_z")
  
  p <- dat %>% 
    filter(name == NAME) %>%
    select(yearID, AB, all_of(STAT), all_of(stat_z)) %>%
    setNames(., c("yearID", "AB", "STAT", "stat_z")) %>%
    ggplot(aes(x = as.factor(yearID), 
               y = stat_z,
               group = 1,
               label = NAME,
               label2 = AB,
               label3 = STAT)) +
    geom_hline(yintercept = 0,
               size = 1.1,
               linetype = "dashed") +
    geom_line(size = 1.2) +
    geom_point(shape = 21,
               size = 6,
               color = "black",
               fill = "white") +
    ylim(-4, 4) 
  
  
  ggplotly(p)
  
}

Build the {shiny} app

The below code will construct the {shiny} app. We allow the user to select a player, select the stats of interest for the polar plot, and select the stat they’d like to track over time.

If you’d like to see a video of the app in use, CLICK HERE.

If you want to run this yourself or build one similar to it, you can access my code on GitHub.

#### Shiny App ---------------------------------------------------------------

## User Interface
ui <- fluidPage(
  
  titlePanel("MLB Hitters Shiny App\n2017-2019"),
  
  
  sidebarPanel(width = 3,
             selectInput("name",
                             label = "Choose a Player:",
                             choices = unique(dat$name),
                             selected = NULL,
                             multiple = FALSE),
              
              selectInput("stat",
                          label = "Choose stats for polar plot:",
                          choices = unique(dat_long$stat),
                          selected = NULL,
                          multiple = TRUE),
              
              selectInput("time_stat",
                          label = "Choose stat for time series:",
                          choices = unique(dat_long$stat),
                          selected = NULL,
                          multiple = FALSE)
  ),
  
  
  mainPanel(
    
    gt_output(outputId = "tbl"),
    
    fluidRow(
      
      column(6, plotOutput(outputId = "polar")),
      column(6, plotlyOutput(outputId = "time"))
    )
    
  )
)



server <- function(input, output){
  
  ## get player selected for table
  NAME <- reactive({ dat_long %>%
      filter(name == input$name) %>%
      distinct(name, .keep_all = FALSE) %>%
      pull(name)
    })
  
  ## get stats for polar plot
  polar_stats <- reactive({ dat_long %>%
      filter(stat %in% c(input$stat)) %>%
      pull(stat)
    
  })
  
  ## get stat for time series
  ts_stat <- reactive({ dat %>%
      select(ba:hr_rate) %>%
      setNames(., c("BA", "OBP", "SLG", "OPS", "HR Rate")) %>%
      select(input$time_stat) %>% 
      colnames()
    
  })
  
  ## table output
  output$tbl <- render_gt(
      tbl_func(NAME = NAME())
  )
  
  ## polar plot output
  output$polar <- renderPlot(
    
    polar_plt(NAME = NAME(),
              STATS = polar_stats())
    
  )
  
  ## time series plot output
  output$time <- renderPlotly(
    
    time_plt(NAME = NAME(),
             STAT = ts_stat())
    
  )

  
}





shinyApp(ui, server)

tidymodels – Extract model coefficients for all cross validated folds

As I’ve discussed previously, we sometimes don’t have enough data for a train/test split to make sense. In those situations, we are better off building our model using cross-validation. In previous blog articles, I’ve talked about how to build models using cross-validation within the {tidymodels} framework (see HERE and HERE). In my prior examples, we fit the model over the cross-validation folds and then constructed the final model that we could use to make predictions later on.

Recently, I ran into a situation where I wanted to see what the model coefficients look like across all of the cross-validation folds. So, I decided to make a quick blog post on how to do this, in case it is useful to others.

Load Packages & Data

We will use the mtcars data set that comes with R and build a regression model, using several independent variables, to predict miles per gallon (mpg).

### Packages -------------------------------------------------------

library(tidyverse)
library(tidymodels)

### Data -------------------------------------------------------

dat <- mtcars

dat %>%
  head()

Create Cross-Validation Folds of the Data

I’ll use 10-fold cross validation.

### Modelling -------------------------------------------------------
## Create 10 Cross Validation Folds

set.seed(1)
cv_folds <- vfold_cv(dat, v = 10)
cv_folds

Specify a linear model and set up the model formula

## Specify the linear regression engine
## model specs
lm_spec <- linear_reg() %>%
  set_engine("lm") 


## Model formula
mpg_formula <- mpg ~ cyl + disp + wt + drat

Set up the model workflow and fit the model to the cross-validated folds

## Set up workflow
lm_wf <- workflow() %>%
  add_formula(mpg_formula) %>%
  add_model(lm_spec) 

## Fit the model to the cross validation folds
lm_fit <- lm_wf %>%
  fit_resamples(
    resamples = cv_folds,
    control = control_resamples(extract = extract_model, save_pred = TRUE)
  )

Extract the model coefficients for each of the 10 folds (this is the fun part!)

Looking at the lm_fit output above, we see that it is a tibble consisting of various nested lists. The id column indicates which cross-validation fold the lists in each row pertain to. The model coefficients for each fold are stored in the .extracts column of lists. Instead of printing out all 10, let’s just have a look at the first 3 folds to see what they look like.

lm_fit$.extracts %>% 
  .[1:3]

There, in the .extracts column, we see <lm> entries indicating the linear model for each fold. With a series of unnesting we can snag the model coefficients and then put them into a tidy format using the {broom} package. I’ve commented each line of code below so that you know exactly what is happening.

# Let's unnest this and get the coefficients out
model_coefs <- lm_fit %>% 
  select(id, .extracts) %>%                    # get the id and .extracts columns
  unnest(cols = .extracts) %>%                 # unnest .extracts, which produces the model in a list
  mutate(coefs = map(.extracts, tidy)) %>%     # use map() to apply the tidy function and get the coefficients in their own column
  unnest(coefs)                                # unnest the coefs column you just made to get the coefficients for each fold

model_coefs
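
If you want a single pooled estimate per term, you can average the estimates across folds; this is the same average the dashed line in the plot below is built from:

## Average each coefficient across the 10 folds
model_coefs %>%
  group_by(term) %>%
  summarize(avg_estimate = mean(estimate),
            .groups = "drop")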

Now that we have a table of estimates, we can plot the coefficient estimates and their 95% confidence intervals. The term column indicates each variable. We will remove the (Intercept) for plotting purposes.

Plot the Coefficients

## Plot the model coefficients and 2*SE across all folds
model_coefs %>%
  filter(term != "(Intercept)") %>%
  select(id, term, estimate, std.error) %>%
  group_by(term) %>%
  mutate(avg_estimate = mean(estimate)) %>%
  ggplot(aes(x = id, y = estimate)) +
  geom_hline(aes(yintercept = avg_estimate),
             size = 1.2,
             linetype = "dashed") +
  geom_point(size = 4) +
  geom_errorbar(aes(ymin = estimate - 2*std.error, ymax = estimate + 2*std.error),
                width = 0.1,
                size = 1.2) +
  facet_wrap(~term, scales = "free_y") +
  labs(x = "CV Folds",
       y = "Estimate ± 95% CI",
       title = "Regression Coefficients ± 95% CI for 10-fold CV",
       subtitle = "Dashed Line = Average Coefficient Estimate over 10 CV Folds per Independent Variable") +
  theme_classic() +
  theme(strip.background = element_rect(fill = "black"),
        strip.text = element_text(face = "bold", size = 12, color = "white"),
        axis.title = element_text(size = 14, face = "bold"),
        axis.text.x = element_text(angle = 60, hjust = 1, face = "bold", size = 12),
        axis.text.y = element_text(face = "bold", size = 12),
        plot.title = element_text(size = 18),
        plot.subtitle = element_text(size = 16))

Now we can clearly see the model coefficients and confidence intervals for each of the 10 cross validated folds.

Wrapping Up

This was a quick and easy way of fitting a model using cross-validation and extracting the model coefficients for each fold. Often this won’t be necessary, as you will fit your model, evaluate it, and be off and running. However, there may be times when a more specific interrogation of the model is required, or you might want to dig a little deeper into the various outputs of the cross-validated folds.

All of the code is available on my GitHub page.

If you notice any errors in the code, please reach out!

Optimization Algorithms in R – returning model fit metrics

Introduction

A colleague had asked me if I knew of a way to obtain model fit metrics, such as AIC or r-squared, from the optim() function. As background, optim() provides a general-purpose way to optimize a set of parameters by minimizing (or maximizing) whatever objective you compare your model against (e.g., sum of squared error, maximum likelihood, etc.). It iterates until the model coefficients are optimal for the data.

To make optim() work for us, we need to code the aspects of the model we are interested in optimizing (e.g., the regression coefficients) as well as code a function that calculates the output we are comparing the results to (e.g., sum of squared error).

Before we get to model fit metrics, let’s walk through how optim() works by comparing our results to a simple linear regression. I’ll admit, optim() can be a little hard to wrap your head around (at least for me, it was), but building up a simple example can help us understand the power of this function and how we can use it later on down the road in more complicated analysis.
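
Before the baseball example, here is about the smallest optim() illustration I can think of (a toy example of my own, not part of the analysis below): find the value of x that minimizes (x - 3)^2.

## Toy example: the minimum of (x - 3)^2 should be at x = 3
optim(par = 0,
      fn = function(x) (x - 3)^2,
      method = "Brent",   # Brent's method is the recommended choice for 1-dimensional problems
      lower = -10,
      upper = 10)$par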

Data

We will use data from the Lahman baseball database. I’ll stick with all years from 2006 on and retain only players with a minimum of 250 at bats per season.

library(Lahman)
library(tidyverse)

data(Batting)

df <- Batting %>% 
  filter(yearID >= 2006,
         AB >= 250)

head(df)

Linear Regression

First, let’s just write a linear regression to predict HR from Hits, so that we have something to compare our optim() function against.

fit_lm <- lm(HR ~ H, data = df)
summary(fit_lm)

plot(df$H, df$HR, pch = 19, col = "light grey")
abline(fit_lm, col = "red", lwd = 2)

The above model is a simple linear regression model that fits a line of best fit based on the squared error of predictions to the actual values.

(NOTE: We can see from the plot that this relationship is not really linear, but it will be okay for our purposes of discussion here.)

We can also use an optimizer to solve this.

Optimizer

To write an optimizer, we need two functions:

  1.  A function that allows us to model the relationship between HR and H and identify the optimal coefficients for the intercept and the slope that minimize the residuals between the actual and predicted values.
  2.  A function that helps us keep track of the error between actual and predicted values as optim() runs through various iterations, attempting a number of possible coefficients for the slope and intercept.

Function 1: A linear function to predict HR from hits

hr_from_h <- function(H, a, b){
  return(a + b*H)
}

This simple function is just a linear equation that takes the values of our independent variable (H) and values for the intercept (a) and slope (b). Although we can plug in numbers and use the function right now, the values of a and b have not been optimized yet, so the model will return a strange prediction for HR. Still, we can plug in some values to see how it works. For example, what if the person has 30 hits?

hr_from_h(H = 30, a = 1, b = 1)

Not a really good prediction of home runs!

Function 2: Sum of Squared Error Function

Optimizers try to identify the key weights in the function by either maximizing or minimizing some value. Since we are using a linear model, it makes sense to try to minimize the sum of the squared error.

The function will take 3 inputs:

  1.  A data set of our dependent and independent variables
  2.  A value for our intercept (a)
  3.  A value for the slope of our model (b)

sum_sq_error <- function(df, a, b){
  
  # make predictions for HR
  predictions <- with(df, hr_from_h(H, a, b))
  
  # get model errors
  errors <- with(df, HR - predictions)
  
  # return the sum of squared errors
  return(sum(errors^2))
  
}

Let’s make a fake data set of a few values of H and HR and then assign some weights for a and b to see if the sum_sq_error() produces a single sum of squared error value.

fake_df <- data.frame(H = c(35, 40, 55),
                      HR = c(4, 10, 12))

sum_sq_error(fake_df,
             a = 3, 
             b = 1)

It worked! Now let’s find the ideal weights for a and b that minimize that sum of squared error. One way to do this is to create a large grid of values, loop over it, and let R plug along, trying each combination, and then keep the pair that minimizes the sum of squared error (see the sketch below). The issue with this approach is that if your model has more independent variables, it can take a very long time to run. A more efficient way is to write an optimizer that takes care of this for us.
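
For illustration, here is a rough sketch of that brute-force grid approach (the grid ranges and step sizes are arbitrary guesses; a finer grid gets closer to the optimum but multiplies the run time):

## Brute force: evaluate the sum of squared error over a grid of a and b values
grid <- expand.grid(a = seq(-5, 5, by = 0.25),
                    b = seq(0, 0.5, by = 0.005))

grid$sse <- mapply(function(a, b) sum_sq_error(df, a, b),
                   grid$a, grid$b)

## best combination found by the grid search
grid[which.min(grid$sse), ]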

We will use the optim() function from base R.

The optim() function takes 2 inputs:

  1.  A numeric vector of starting points for the parameters you are trying to optimize. These can be any values.
  2.  A function that will receive the vector of starting points and that contains all of the parameters we want to optimize. Here it wraps our sum_sq_error() function, which gets passed the starting values for a and b and searches for the values that minimize the sum of squared error.

optimizer_results <- optim(par = c(0, 0),
                           fn = function(x){
                             sum_sq_error(df, x[1], x[2])
                             }
                           )

Let’s have a look at the results.

optimizer_results

In the output:

  • value tells us the sum of the squared error
  • par tells us the weighting for a (intercept) and b (slope)
  • counts tells us how many times the optimizer ran the function. The gradient is NA here because we didn’t specify a gradient argument in our optimizer
  • convergence tells us if the optimizer found the optimal values (when it goes to 0, that means everything worked out)
  • message is any message that R needs to inform us about when running the optimizer

Let’s focus on par and value since those are the two values we really want to know about.

First, notice how the values for a and b are nearly the exact same values we got from our linear regression.

a_optim_weight <- optimizer_results$par[1]
b_optim_weight <- optimizer_results$par[2]

a_reg_coef <- fit_lm$coef[1]
b_reg_coef <- fit_lm$coef[2]

a_optim_weight
a_reg_coef

b_optim_weight
b_reg_coef

Next, we can see that the sum of squared error from the optimizer is essentially the same as the sum of squared error from the linear regression.

sse_optim <- optimizer_results$value
sse_reg <- sum(fit_lm$residuals^2)

sse_optim
sse_reg

We can finish by plotting the two regression lines over the data and show that they produce the same fit.

plot(df$H, 
     df$HR,
     col = "light grey",
     pch = 19,
     xlab = "Hits",
     ylab = "Home Runs")
title(main = "Predicting Home Runs from Hits",
     sub = "Red Line = Regression | Black Dashed Line = Optimizer")
abline(fit_lm, col = "red", lwd = 5)
abline(a = a_optim_weight,
       b = b_optim_weight,
       lty = 2,
       lwd = 3,
       col = "black")

Model Fit Metrics

The value parameter from our optimizer output returned the sum of squared error. What if we wanted to return a model fit metric, such as AIC or r-squared, so that we can compare several models later on?

Instead of using the sum of squared errors function, we can attempt to minimize AIC by writing our own AIC function.

aic_func <- function(df, a, b){
  
  # make predictions for HR
  predictions <- with(df, hr_from_h(H, a, b))
  
  # get model errors
  errors <- with(df, HR - predictions)
  
  # calculate AIC
  aic <- nrow(df)*(log(2*pi)+1+log((sum(errors^2)/nrow(df)))) + ((length(c(a, b))+1)*2)
  return(aic)
  
}

We can try out the new function on the fake data set we created above.

aic_func(fake_df,
         a = 3,
         b = 1)

Now, let’s run the optimizer with the AIC function instead of the sum of square error function.

optimizer_results <- optim(par = c(0, 0),
                           fn = function(x){
                             aic_func(df, x[1], x[2])
                             }
                           )

optimizer_results

We get the same coefficients, with the difference being that the value parameter now returns AIC. We can check this AIC against the one from our original linear model.

AIC(fit_lm)

If we instead wanted to obtain an r-squared, we can write an r-squared function. Since a higher r-squared is better, we need to indicate that we want to maximize this value (optim defaults to minimization). To do this we set the fnscale argument to -1. The one quirk with this function is that it doesn’t return the coefficients properly in the par section of the results. This makes sense once you notice that r-squared depends only on the correlation between the predictions and HR, and correlation is unchanged by adding an intercept or by scaling by any positive slope, so every pair of a and b (with b > 0) produces the identical r-squared and the optimizer has no unique solution to converge on. I am still able to reproduce the exact r-squared from the linear model, however.

## r-squared function
r_sq_func <- function(df, a, b){
  
  # make predictions for HR
  predictions <- with(df, hr_from_h(H, a, b))
  
  # r-squared between predicted and actual HR values
  r2 <- cor(predictions, df$HR)^2
  return(r2)
  
}

## run optimizer
optimizer_results <- optim(par = c(0.1, 0.1),
                           fn = function(x){
                             r_sq_func(df, x[1], x[2])
                             },
      control = list(fnscale = -1)
 )

## get results
optimizer_results

## Compare results to what was obtained from our linear regression
summary(fit_lm)$r.squared
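
We can verify the flat surface described above directly: any intercept and any positive slope return the identical r-squared (the specific a and b values below are arbitrary).

## r-squared is unchanged by the intercept or by positive rescaling of the slope
r_sq_func(df, a = 0, b = 1)
r_sq_func(df, a = 50, b = 0.2)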

Wrapping up

That was a short tutorial on writing an optimizer in R. There is a lot going on with these types of functions and they can get complicated very quickly. I find that starting with a simple example and building from there is always useful. We also looked at having the optimizer return various model fit metrics.

If you notice any errors in the code, please reach out!

Access to the full code is available on my GitHub page.