Category Archives: Sports Science

Can I please be introduced to the Non-Applied Sport Scientist?

A recent discussion on Twitter spurred some thoughts I had with respect to titles and roles in sport, in particular the title/role of Applied Sport Scientist.

@ScientistSport posed the following question:

It’s an interesting question to ponder. Given that sport science was originally born out of physiologists attempting to study human performance in Olympic sport athletes (which then eventually bled into team sport athletes), the question makes sense. Moreover, it seems like people generally think of sport science as something directed at helping the team “train better” – monitoring training loads, testing strength, power, and conditioning, and even entering into the discussion of return to play following injury. Such a role has led many teams to employ an Applied Sport Scientist.

Titles in sport are weird. What does an Applied Sport Scientist do? What is the description of the role? More importantly, is there a Non-Applied Sport Scientist? If so, what are they doing?

Generally, when I’ve been introduced to the Applied Sport Scientist at a team, what I’ve found is that they are an assistant strength coach or assistant athletic trainer who has been tasked with turning on GPS units, conducting force plate jumps with the players, and coordinating the reports from the team’s Athlete Management System (AMS).

No doubt these are important tasks and critical to helping the staff plan and manage the team’s training! But, why is this a science role? What’s scientific about it? Is the individual ensuring data quality and integrity is being maintained before it is stored in the AMS? Is the individual conducting scientific inquiry of the data within the AMS to understand the measurements being made and determining if the measures are valid, reliable, or responsive? More importantly, how is the individual using the abundance of data being collected to answer larger questions that are relevant to the entire organization?

Perhaps the role shouldn’t be called Applied Sport Scientist? Maybe it should be Data Collection Coordinator or something more descriptive of the task at hand? Titles matter! They define what we do and how we do it. Again, if there is an Applied Sport Scientist is there a Non-Applied Sport Scientist? Maybe the latter is the one doing the real scientific work – identifying the pertinent research questions, planning applied science studies, structuring and establishing best practice data collection methods, analyzing data, and communicating the results to the end users.

What should the role of an Applied Sport Scientist be?

While some may feel like my argument is a bit pedantic, here is why it matters.

The aim of the Applied Sport Scientist or the Sport Science Department should be to answer questions across the entire sporting organization. This shouldn’t simply be limited to matters of strength and conditioning. Rather, the goal should be to apply the scientific method to any and all questions in sport – training, return to play, performance evaluation, player acquisition, team tactics, etc. – and work at the intersection of such topics to provide analysis that helps the key stakeholders make decisions. A few colleagues and I wrote a paper about the parallels between Business Intelligence and Sports Science a few years ago <CLICK HERE>.

Science isn’t just a title; it is a framework and process for asking and answering questions. Or, as David Salsburg states in his brilliant book The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century, “Science, we are often taught, is measurement. We make careful measurements and use them to find mathematical formulas that describe nature.” Consequently, someone who is given the title Applied Sport Scientist should actually have scientific training. Framing a question, collecting data, doing basic statistics, knowing basic physiology and biomechanics, understanding how to run a simple reliability study, etc., are fundamental skills for this individual. Calling someone a Sport Scientist who doesn’t have these skills – even though they might be a really smart person and they might know a good deal about whatever technology they are using – is like calling me a strength coach. Sure, I can write a program and I can train and coach people. But, that’s probably not why you would hire me. Just as the strength coach can collect data and print reports, you aren’t hiring them to conduct scientific investigations. You’re also not hiring the Physical Therapist to run the nutrition program.

Being smart and hardworking are important qualities in sport and everyone can help out in various areas of the organization. But titles should matter because they in some way define roles and responsibilities. The best organizations find the right people, with the right skill sets, to work together and create a super team.

As I like to say, Success boils down to four things:

  1. Knowing what you know.
  2. Working to be really good at what you know.
  3. Knowing what you don’t know.
  4. Knowing enough about what you don’t know to ask the right questions to get people in who can help you out with that thing.


Removing columns with NA for fluid table building in shiny

One of the most frustrating aspects of building {shiny} apps is dealing with columns that have NAs when outputting tables. This is common in sport when dealing with players from different position groups who may have different stats that describe performance for those positions. Rather than writing a long series of if/else statements, I prefer to streamline the process by dropping those columns prior to returning the table of data. Not only does this make the app run smoothly, but it also makes it easier to debug or add additional table information without having to deal with a lot of nested if/else statements.

The full code is accessible on my GITHUB page.

Load Packages & Simulate Data

## Removing columns with NA for fluid table building in shiny

## packages ---------------------------------------------------
library(tidyverse)
library(shiny)

## simulate data ----------------------------------------------
d <- tribble(
  ~player, ~position, ~stat1, ~stat2, ~stat3,
  'Frank', 'Pitcher', 10, NA, 33,
  'Tom', 'Batter', NA, 14, 12,
  'Jeff', 'Batter', NA, 5, NA,
  'Harold', 'Pitcher/Batter', 12, 33, 9
)

d

We can see from our little data set that different players have different stats populated. We really don’t want our users to have to look at NA values in the table output. So, we need to devise a way to drop the columns with NA’s once a specific player has been selected.

Dropping Columns with NA in Base R

Let’s select one player and attempt to drop the columns with NA. In base R we will use the colSums() function, wrapped around is.na(), to produce a count of the number of NA’s in each column.

## remove columns with NA for Frank, using Base R --------------------------

frank <- d %>% 
  filter(player == 'Frank')

# colSums() can be used to count the NA's in each column
colSums(is.na(frank))

We can see that stat2 has 1 NA while the other 4 columns are complete. We can use this information to retain those four columns and drop stat2.

frank[ , colSums(is.na(frank)) == 0]

Dropping Columns with NA in {dplyr}

We can perform a similar task using the select_if() function within the {dplyr} package and indicating that we want to select all columns without an NA.

frank %>%
  select_if(~!is.na(.x))
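As a quick aside, select_if() has since been superseded in {dplyr} (version 1.0 and later) in favor of where(). For this single-row case, a sketch of the same idea with the newer syntax would be:

# equivalent approach using where() (dplyr >= 1.0)
frank %>%
  select(where(~!is.na(.x)))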

Build a shiny app that fluidly retains the columns without NA

Now that we have a few strategies for removing columns with NA, we can build a {shiny} app that allows the user to select a player; the server will then fluidly drop the columns with NA so that we don’t need to use a messy if/else chain.

Notice that prior to dropping columns with NA I set the names of all of the columns in the table so that they look nicer when the table gets rendered. We can see from the figures that no matter which player is selected, the server intelligently drops columns with missing data, allowing the user to see only the statistics that are meaningful for the individual.

# UI
ui <- fluidPage(
  
  sidebarPanel(
    
    selectInput(inputId = 'player',
                label = "Select Player",
                choices = sort(unique(d$player)),
                selected = FALSE,
                multiple = FALSE)
    
  ),
  
  mainPanel(
    
    tableOutput(outputId = 'tbl')
  )
)

# Server
server <- function(input, output){
  
  # get selected player
  dat_tbl <- reactive({ d %>%
      filter(player == input$player)
    
  })
  
  # build table
  output$tbl <- renderTable({ dat_tbl() %>%
      setNames(c("Player", "Position", "Stat 1", "Stat 2", "Stat 3")) %>%
      select_if(~!is.na(.x))
    
  })
  
}


# deploy
shinyApp(ui, server)

Wrapping Up

Instead of having users see columns with NA, make your renderTable() function fluid and automatically drop columns with missing values to improve the user experience.

The full code is accessible on my GITHUB page.

Loop function to save multiple plots as SVG files

I’ve discussed using loops for a number of statistical tasks (simulation, optimization, Gibbs sampling) as well as data processing tasks, such as writing data outputs to separate excel tabs within one excel file and creating a multiple page PDF with a plot on each page.

Today, I want to expand the loop function to produce separate SVG file plots and have R save those directly to a folder stored on my computer. The goal here is to have the separate plots in one place so that I can upload those files directly to a web app and allow them to be viewable for a decision-maker.

NOTE: You can save these files in other formats (e.g., jpeg, png). I chose SVG because it was the primary file type I had been working with.

Data

To keep the example simple, we will be using the mtcars data set, which is freely available in R. I’m going to set the cylinder (cyl) variable to be a factor as that is the variable that we will build our separate plot files for. In the sport setting, you can think of this as player names or player IDs, where you are building a plot for each individual, looping over them and producing a separate plot file.

library(tidyverse)
library(patchwork)

theme_set(theme_bw())

## data
dat <- mtcars %>%
  mutate(cyl = as.factor(cyl))


Example Plots

Here is an example of the three types of plots we will build. We will wrap the three plots together using the {patchwork} package. The below plot is using all of the data but our goal will be to produce a loop function that creates the same plot layout using data for each of the three cylinder types.

p1 <- dat %>%
  ggplot(aes(x = drat, y = hp)) +
  geom_point(size = 5) +
  geom_smooth(method = "lm",
              se = FALSE) +
  ggtitle("hp ~ drat")

p2 <- dat %>%
  count(carb) %>%
  mutate(carb = as.factor(carb)) %>%
  ggplot(aes(x = n, y = reorder(carb, n))) +
  geom_col() +
  labs(x = "Count",
       y = "Carb",
       title = "Carb Count")

p3 <- dat %>%
  ggplot(aes(x = wt)) +
  geom_histogram(fill = "light grey",
                 color = "black",
                 bins = 5) +
  ggtitle("Engine Weight")


(p2 | p3) / p1

Creating the loop for plotting

First, we create a function that produces the plots above. Basically, I’m taking the plotting code from above and wrapping it in a function. The function takes an input, i (a data frame for one cylinder type), builds the three plots for that input, and at the end uses the ggsave() function to save the combined layout to the dedicated file path.


# create a plot function for each cyl
plt_func <- function(i){
  p1 <- i %>%
    ggplot(aes(x = drat, y = hp)) +
    geom_point(size = 5) +
    geom_smooth(method = "lm",
                se = FALSE) +
    ggtitle("hp ~ drat")
  
  p2 <- i %>%
    count(carb) %>%
    mutate(carb = as.factor(carb)) %>%
    ggplot(aes(x = n, y = reorder(carb, n))) +
    geom_col() +
    labs(x = "Count",
         y = "Carb",
         title = "Carb Count")
  
  p3 <- i %>%
    ggplot(aes(x = wt)) +
    geom_histogram(fill = "light grey",
                   color = "black",
                   bins = 5) +
    ggtitle("Engine Weight")
  
  three_plt <- (p2 | p3) / p1
  
  
  ggsave(three_plt, file = paste0(unique(i$cyl), ".svg"))
}
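As noted at the top of the post, saving to a different file type is just a matter of changing the extension in the file name, since ggsave() infers the graphics device from it. For example, a png version of the same save call inside the function might look like:

# save as png instead of svg (ggsave() picks the device from the file extension)
ggsave(three_plt, file = paste0(unique(i$cyl), ".png"))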

Then, we use the split() function to split the data frame into a named list, with each cylinder type becoming its own list element containing a data frame. The map() function then loops over that list and, for each element (each cylinder type), runs our plot function above and saves the result. Notice that I’ve specified setwd() to indicate where I want the files to be saved. If you are saving thousands of files at once and you don’t specify this, with your working directory defaulted to your desktop, it becomes a mess pretty quickly (trust me!).

# setwd("name of the file path where you want to save the files goes here")
dat %>% 
  split(.$cyl) %>% 
  map(plt_func)
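A small aside: because plt_func() is called purely for its side effect of writing files, purrr’s walk() works as a drop-in replacement for map() here and avoids printing the returned list to the console:

# walk() applies the function for its side effects and returns the input invisibly
dat %>% 
  split(.$cyl) %>% 
  walk(plt_func)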

Once you’ve run the loop, your R output should look like this, where we see that each list element (cylinder) is being saved as an SVG file.

Our destination folder now contains the plot outputs, one SVG file per cylinder type.

If I click on any one of the SVG files I get the desired plot.

The above plot is for a 4 cylinder vehicle. Notice that I didn’t specify the cylinder type at the top of the plot because my initial assumption was that I would be uploading the individual SVG files to a web application where there is a webpage dedicated to each cylinder type. Therefore, naming the plots by cylinder type would be redundant. However, if you want to add a title to the {patchwork} layout above, you can use the plot_annotation() function.

(p2 | p3) / p1 + plot_annotation(title = "Engine cylinders")

We can add the plot_annotation() function to the loop but instead of a generic title, like above, we will need to create a bespoke title within the loop that stores each cylinder type. To do this, we use the paste() function to add the cylinder number in front of the word “cylinder” in our plot title name.

plt_func <- function(i){
  p1 <- i %>%
    ggplot(aes(x = drat, y = hp)) +
    geom_point(size = 5) +
    geom_smooth(method = "lm",
                se = FALSE) +
    ggtitle("hp ~ drat")
  
  p2 <- i %>%
    count(carb) %>%
    mutate(carb = as.factor(carb)) %>%
    ggplot(aes(x = n, y = reorder(carb, n))) +
    geom_col() +
    labs(x = "Count",
         y = "Carb",
         title = "Carb Count")
  
  p3 <- i %>%
    ggplot(aes(x = wt)) +
    geom_histogram(fill = "light grey",
                   color = "black",
                   bins = 5) +
    ggtitle("Engine Weight")
  
  cyl_name <- i %>% 
    select(cyl) %>%
    distinct(cyl) %>%
    pull(cyl)
  
  three_plt <- (p2 | p3) / p1 + 
    plot_annotation(title = paste(cyl_name, "cylinder", sep = " "))
  
  ggsave(three_plt, file = paste0(unique(i$cyl), ".svg"))
}

# setwd("name of the file path where you want to save the files goes here")
dat %>% 
  split(.$cyl) %>% 
  map(plt_func)

Now we have plots with named titles.

Wrapping Up

During those times where you need to produce several individual plots, rather than doing them one-by-one, leverage R’s loop functions to rapidly produce multiple plots in one shot.

The full code is accessible on my GITHUB page.

Catapult GPS – Converting the practice duration string to minutes

One of the most frustrating things to deal with is date and time strings. Using Catapult GPS, a popular GPS provider for professional and collegiate sports teams, practice duration is reported in their export as a string of hours : minutes : seconds. Unfortunately, we can’t do much with this string as-is. If we want to perform additional computations, for example calculating player load per minute, we need to convert this column into total minutes.

I’ve had a few people in the sports performance field reach out and ask how to do this in R because they often get frustrated and just resort to changing the data in their CSV download prior to importing it into R, where they then do their plotting and visualizing. Today, I’ll walk through a few steps using the {lubridate} package and show you how you can handle this data cleaning all within your R environment.

Load Packages & Get Data

We start by loading {tidyverse} and {lubridate} and some fake Catapult data that I’ve created.

### Packages ---------------------------------------
library(tidyverse)
library(lubridate)

### Load Data -------------------------------------
catapult <- read.csv("catapult_example.csv", header = TRUE) %>%
  janitor::clean_names()

catapult
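If you don’t have a Catapult export to load, a small stand-in data set with the same general structure can be simulated in place of the read.csv() call (the column names and values below are assumptions, chosen to match what the rest of the post expects):

## simulate stand-in data (hypothetical values; column names are assumptions)
catapult <- tibble(
  date = c("1/10/23", "1/11/23", "1/12/23"),
  player = c("Frank", "Tom", "Jeff"),
  duration = c("01:37:10", "01:12:45", "00:58:30"),
  player_load = c(455, 322, 301)
)

catapult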

Adjusting time

We can see the duration string (hour : minute : second) indicating that the session was 97 minutes and 10 seconds long. Before handling the entire column of data, let’s just grab a single observation and work through the functions we need so that we know what is going on.

### Adjust Time ------------------------------------
# first, grab a single duration value; hms() will then split it into its component parts
single_time <- catapult %>% 
  slice(1) %>% 
  pull(duration)

single_time

The hms() function parses the character string into a period object with hour, minute, and second components.

single_time2 <- hms(single_time)
single_time2

Once we have the individual components in the period object, we can extract them with the hour(), minute(), and second() functions and have each returned back as a number.

# Select each component 
hour(single_time2)
minute(single_time2)
second(single_time2)

Once in numeric form, we convert to total minutes by multiplying the hours by 60, dividing the seconds by 60, and then summing those together with the minutes.

hour(single_time2)*60 + minute(single_time2) + second(single_time2)/60


The finished product suggests the session was 97.2 minutes long.
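As an aside, {lubridate} also offers period_to_seconds(), which collapses the period into total seconds; dividing by 60 should give the same 97.2 minutes:

# same conversion using period_to_seconds()
period_to_seconds(single_time2) / 60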

Applying the approach to all of our data

Now that we understand what is going on under the hood, we can apply this at scale to all of our data.

catapult <- catapult %>%
  mutate(hour_min_sec = hms(duration),
         pract_time = hour(hour_min_sec) * 60 + minute(hour_min_sec) + second(hour_min_sec) / 60)

catapult

After getting practice time into minutes we will adjust the date column from a character string to an actual date, using the as.Date() function.

catapult$date <- as.Date(catapult$date, "%m/%d/%y")
catapult
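Since {lubridate} is already loaded, its mdy() parser is a handy alternative here because it works out the month/day/year components without an explicit format string. As an alternative to the as.Date() line above, a sketch would be:

# alternative to as.Date() using lubridate's mdy() (run instead of, not after, the line above)
# catapult$date <- mdy(catapult$date)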

To finish, we will do a bit of clean up and remove the duration and hour_min_sec columns, round the player_load and pract_time columns to one decimal place, and create a player_load_per_min column.

catapult %>%
  select(-duration, -hour_min_sec) %>%
  mutate(across(.cols = player_load:pract_time,
                ~round(.x, 1)),
         player_load_per_min = round(player_load / pract_time, 2))

Now we have a cleaned data set that we can work with!

Access to the full code is available on my GITHUB page.

R {shiny} app with PDF save report capabilities

Over the previous several articles I’ve shared different approaches to sharing and communicating athlete data. During this time I got a question about {shiny} apps and whether I had an easy way to build in the capability to save the report as a PDF, for those times when you want to email it out or print it and take it to a decision-maker.

Today I’ll go over two of the easiest ways I can think of to add some PDF save functionality to your {shiny} app. Before we jump in, if you are looking to just get started with {shiny} apps, aside from searching my blog for the various apps I’ve built (there are several!), Ellis Hughes and I did a 4 part series on building a {shiny} app from scratch.

Alright, now to jump into building a {shiny} app with the ability to save as PDF. As always, you can access the full code to the article on my GITHUB page.

Loading Packages & Data

As always, we need to load the packages that we need and some data. For this, I’ll keep things simple and just use the mtcars data that is available in base R, since I’m mainly concerned with showing how to build the app, not the actual data analysis.

#### packages ----------------------------------------------
library(shiny)
library(shinyscreenshot)
library(DT)
library(gridExtra)
library(ggpubr)
library(tidyverse)

## data ----------------------------------------------------
dat <- mtcars %>%
  mutate(cyl = as.factor(cyl),
         car_type = rownames(.)) %>%
  relocate(car_type, .before = mpg)


App 1: Printing the app output as its own report

The user interface for this app will allow the user to select a Cylinder (cyl) number and the two plots and table will update with the available info.
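The UI code lives in the full script on GitHub; as a reference point, a minimal sketch that matches the input and output IDs used in the server below (the layout, labels, and title are assumptions) could look like this:

# minimal UI sketch (IDs match the server; layout, labels, and title are assumptions)
ui <- fluidPage(
  
  titlePanel("mtcars Report"),
  
  sidebarPanel(
    
    selectInput(inputId = 'cyl',
                label = "Select Cylinders",
                choices = sort(unique(dat$cyl)),
                multiple = FALSE),
    
    downloadButton(outputId = 'export',
                   label = "Download")
    
  ),
  
  mainPanel(
    
    plotOutput(outputId = 'plt1'),
    tableOutput(outputId = 'tbl'),
    plotOutput(outputId = 'plt2')
    
  )
)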

The server of this app is where the magic happens. What the user sees on the web app is not exactly what it looks like when saved as a PDF. To make this version work, I need to store my outputs in their own elements and then take those elements and output them as an export. I do this by saving a copy within the render function for each of the outputs. I also create an empty reactive values element within the server, which sets each plot and table to NULL, but serves as a container to store the output each time the user changes the cylinder number.

You’ll notice in the output$tbl section of the server, I produce one table for viewing within the app while the second table is stored for PDF purposes. I do this because I like the ggtexttable() table better than the simple base R one, as it has more customizable options. Thus, I use that one for the PDF report. Here is what the server looks like:

server <- function(input, output){
  
  ## filter cylinder
  cyl_df <- reactive({
    
    req(input$cyl)
    
    d <- dat %>%
      filter(cyl == input$cyl)
    d
    
  })
  
  
  ## output plt1
  output$plt1 <- renderPlot({
    
    vals$plt1 <- cyl_df() %>%
      ggplot(aes(x = wt, y = mpg)) +
      geom_point(size = 4) +
      theme_bw() +
      labs(x = "wt",
           y = "mpg",
           title = "mpg ~ wt") +
    theme(axis.text = element_text(size = 12, face = "bold"),
          axis.title = element_text(size = 15, face = "bold"),
          plot.title = element_text(size = 20))
    
    vals$plt1
    
    
  })
  
  ## output table
  output$tbl <- renderTable({
    
    tbl_df <- cyl_df() %>%
      setNames(c("Car Type", "MPG", "CYL", "DISP", "HP", "DRAT", "WT", "QSEC", "VS", "AM", "GEAR", "CARB"))
    
    # store table for printing
    vals$tbl <- ggtexttable(tbl_df,
                            rows = NULL,
                            cols = c("Car Type", "MPG", "CYL", "DISP", "HP", "DRAT", "WT", "QSEC", "VS", "AM", "GEAR", "CARB"),
                            theme = ttheme('minimal',
                                           base_size = 12))
    
    # return table for viewing
    tbl_df
    
  })
  
  
  ## output plt2
  output$plt2 <- renderPlot({
    
    vals$plt2 <- cyl_df() %>%
      ggplot(aes(x = disp, y = hp)) +
      geom_point(size = 4) +
      theme_bw() +
      labs(x = "disp",
           y = "hp",
           title = "hp ~ disp") +
      theme(axis.text = element_text(size = 12, face = "bold"),
            axis.title = element_text(size = 15, face = "bold"),
            plot.title = element_text(size = 20))
    
    vals$plt2
    
  })
  
  
  ## The element vals will store all plots and tables
  vals <- reactiveValues(plt1=NULL,
                         plt2=NULL,
                         tbl=NULL)
  
  
  ## clicking on the export button will generate a pdf file 
  ## containing all stored plots and tables
  output$export = downloadHandler(
    filename = function() {"plots.pdf"},
    content = function(file) {
      pdf(file, onefile = TRUE, width = 15, height = 9)
      grid.arrange(vals$plt1,
                   vals$tbl,
                   vals$plt2,
                   nrow = 2,
                   ncol = 2)
      
      dev.off()
    })
}


Here is what the shiny app will look like when you run it:

When the user clicks the Download button on the upper left, they can save a PDF, which looks like this:

Notice that we get back the plots and table from the {shiny} app; however, we don’t have the overall title. We could remedy this within the server (a quick sketch of one option is below), but what if we want to simply produce a PDF that looks exactly like what we see in the web app?
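For completeness, one way to add the missing title (a sketch, not part of the original app) would be to pass a title into the grid.arrange() call inside the downloadHandler via its top argument:

# inside the downloadHandler, add a page title via grid.arrange()'s top argument
grid.arrange(vals$plt1,
             vals$tbl,
             vals$plt2,
             nrow = 2,
             ncol = 2,
             top = "mtcars Report")   # title text is an assumption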

App 2: Take a screen shot of your shiny app!

If we want to have the downloadable output look exactly like the web app, we can use the package {shinyscreenshot}.

The user interface of the app will remain the same. The server will change as you no longer need to store the plots. You simply need to add an observeEvent() function and tell R that you want to take a screenshot of the page once the button is pressed!

Since we are taking a screenshot, I also took the liberty of changing the table of data to a {DT} table. I like {DT} tables better because they are interactive and have more functionality. In the previous {shiny} app it was harder to use that sort of interactive table and store it for PDF printing. Because the screenshot captures whatever is on screen, it opens up a lot more options for us to customize the output.
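Only two pieces of the UI need to change to match this server (again a sketch, with the same assumptions as before): the download button becomes an actionButton() whose ID matches the observeEvent() below, and tableOutput() becomes DTOutput() to pair with renderDT():

# UI sketch for App 2 (IDs match the server; layout, labels, and title are assumptions)
ui <- fluidPage(
  
  titlePanel("mtcars Report"),
  
  sidebarPanel(
    
    selectInput(inputId = 'cyl',
                label = "Select Cylinders",
                choices = sort(unique(dat$cyl)),
                multiple = FALSE),
    
    actionButton(inputId = 'go',
                 label = "Screenshot Report")
    
  ),
  
  mainPanel(
    
    plotOutput(outputId = 'plt1'),
    DTOutput(outputId = 'tbl'),
    plotOutput(outputId = 'plt2')
    
  )
)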

Here is what the server looks like:

server <- function(input, output){
  
  ## filter cylinder
  cyl_df <- reactive({
    
    req(input$cyl)
    
    d <- dat %>%
      filter(cyl == input$cyl)
    d
    
  })
  
  
  ## output plt1
  output$plt1 <- renderPlot({ cyl_df() %>%
      ggplot(aes(x = wt, y = mpg)) +
      geom_point(size = 4) +
      theme_bw() +
      labs(x = "wt",
           y = "mpg",
           title = "mpg ~ wt") +
    theme(axis.text = element_text(size = 12, face = "bold"),
          axis.title = element_text(size = 15, face = "bold"),
          plot.title = element_text(size = 20))
    
  })
  
  ## output table
  output$tbl <- renderDT({ cyl_df() %>%
      datatable(class = 'cell-border stripe',
                rownames = FALSE,
                filter = "top",
                options = list(pageLength = 4),
                colnames = c("Car Type", "MPG", "CYL", "DISP", "HP", "DRAT", "WT", "QSEC", "VS", "AM", "GEAR", "CARB"))
    
  })
  
  ## output plt2
  output$plt2 <- renderPlot({ cyl_df() %>%
      ggplot(aes(x = disp, y = hp)) +
      geom_point(size = 4) +
      theme_bw() +
      labs(x = "disp",
           y = "hp",
           title = "hp ~ disp") +
    theme(axis.text = element_text(size = 12, face = "bold"),
          axis.title = element_text(size = 15, face = "bold"),
          plot.title = element_text(size = 20))
    
    
  })
  
  observeEvent(input$go, {
    screenshot()
  })
}

The new web app looks like this:

Looks pretty similar, just with a nicer table. If the user clicks the Screenshot Report button at the upper left, R will save a png file of the report, which looks like this:

As you can see, this produces a downloadable report that is exactly like what the user sees on their screen.

Wrapping Up

There are two simple ways to build some save functions directly into your {shiny} apps. Again, if you’d like the full code, you can access it on my GITHUB page.