# R Tips & Tricks: Force-Velocity-Power Profile Graphs in R Shiny

A colleague of mine was asking how he could produce plots of force-velocity-power profiles for his coaches based on their GymAware testing. Rather than producing static plots for this sort of thing, it is sometimes easier to build a nice Shiny app so that the coaches can interact with it, or so the practitioner can quickly change between players or position groups when giving a presentation to the staff.

All code and data are available on my GitHub page (R Script and Data).

This tutorial will cover:

1) Building polynomial regression to represent the average team trend
2) Iterative approaches to building static plots
3) Iterative approaches to building Shiny web apps

This tutorial has a number of different iterations of plots and web apps, so I advise working through the R code on your own to see how every step is performed.

Data

After loading the two required packages, {tidyverse} and {shiny}, we load in the data set and we see that it consists of Force, Power, and Velocity values across 5 different external loads for 3 different athletes:

If you want to use the R-script to produce reports for yourself going forward just ensure that your data has the above columns (named the same way, since R is case-sensitive). If you have data with different column names, your two choices are: (1) change my code to match the column names of your data; or, (2) once you pull the data into R change the columns names in your code to match mine.
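For example, here is a minimal sketch of option 2, assuming (hypothetically) that your device exports columns named force_n, velocity_ms, and power_w:

```
## Hypothetical export names -- swap in whatever your data actually uses
fv_profile <- fv_profile %>%
  rename(Force = force_n,
         Velocity = velocity_ms,
         Power = power_w)
```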

Average Trend Line (2nd Order Polynomial)

Eventually, we are going to want to compare our athletes to the team average (or the position group average if your sport is more heterogeneous). This type of data is often modeled as a 2nd order polynomial regression. Thus, we will build this type of regression to predict both Velocity and Power from Force. Once I have these two regressions built, I can create a data frame that consists of a sequence of Force values (from the minimum to the maximum observed in the team’s data) and predict the velocity and power at each unit of force.

```
## 2nd order polynomial regressions predicting Velocity and Power from Force
fit_velo <- lm(Velocity ~ poly(Force, 2), data = fv_profile)
fit_power <- lm(Power ~ poly(Force, 2), data = fv_profile)

## Sequence of Force values spanning the observed range of the team's data
avg_tbl <- data.frame(Force = seq(from = min(fv_profile$Force),
                                  to = max(fv_profile$Force),
                                  by = 0.5))

## Predicted velocity and power at each unit of force
avg_tbl$Velocity_Avg <- predict(fit_velo, newdata = avg_tbl)
avg_tbl$Power_Avg <- predict(fit_power, newdata = avg_tbl)
colnames(avg_tbl)[1] <- "Force_Grp"
```

Static Plots

I’ll walk through a few iterations of static plots. See the GitHub code for how these were produced.

1) All athletes with an average team trend line

This plot gives us a sense of the trend in Velocity as it relates to increasing amounts of force. Below you will see one version with a solid trend line and one with a dashed trend line. Feel free to use whichever one you prefer.

2) Team trend for Velocity and Power

The common way this type of data is visualized in the sports science literature is a bit tricky in R because it requires a dual y-axis. To obtain this, within my ggplot call I shrink Power down to a scale more similar to Velocity (the main y-axis) by dividing it by 1000. Then, when I call the second y-axis with the sec_axis() function, I multiply Power by 1000 to put it back on its normal scale.
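Here is a minimal sketch of that dual-axis trick, using the avg_tbl we built above (the axis labels are my own):

```
ggplot(avg_tbl, aes(x = Force_Grp)) +
  geom_line(aes(y = Velocity_Avg), size = 1.1) +
  geom_line(aes(y = Power_Avg / 1000), size = 1.1, linetype = "dashed") +  # shrink Power onto the Velocity scale
  scale_y_continuous(name = "Velocity",
                     sec.axis = sec_axis(~ . * 1000, name = "Power")) +   # expand it back for the second axis
  labs(x = "Force")
```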

3) Accounting for individuals

The above plots are a look at the entire team (in this case only 3 athletes). However, we may want to look at the individuals more explicitly. As such, we build four plots to show individual differences:

1) All athletes all on the same plot with their corresponding individualized trend lines (NOTE: if you have a lot of athletes, this plot can get pretty busy and ultimately become useless).
2) Plot each individual as a facet to look at the athletes separately.
3) Create the same plot as #2 but add in the group average trend line (which we created using our 2nd order polynomial regression) to allow us to compare each athlete to the group.
4) Plot each individual as a facet with velocity and power on separate y-axes.

Shiny App Development

The still figures above are nice, but making things interactive is much more useful. I have four {shiny} app iterations that we can go through. Again, following the R code while reading this will help you understand what is going on under the hood. Additionally, you’ll want to run these in R so that R can open up the webpage on your computer and allow you to play with the app. Below are still shots of each app iteration.

1) Version 1: Independent Player Plots

This version allows you to select one player at a time.
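Here is a minimal sketch of what such an app could look like. I'm assuming the athlete identifier column in fv_profile is named Athlete; the full app on GitHub has more polish:

```
library(shiny)
library(tidyverse)

## UI: a dropdown of athletes and a plot area
ui <- fluidPage(
  selectInput("player", "Select Athlete:", choices = unique(fv_profile$Athlete)),
  plotOutput("fv_plot")
)

## Server: filter to the chosen athlete and plot their force-velocity data
server <- function(input, output){
  output$fv_plot <- renderPlot({
    fv_profile %>%
      filter(Athlete == input$player) %>%
      ggplot(aes(x = Force, y = Velocity)) +
      geom_point(size = 3) +
      geom_smooth(method = "lm", formula = y ~ poly(x, 2), se = FALSE)
  })
}

shinyApp(ui = ui, server = server)
```

Versions 2 through 4 build on this same skeleton, mainly by adding multiple = TRUE to the selectInput() and a facet_wrap() to the plot.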

2) Version 2: Add Players to Facets

This version lets you select however many players you want and it will add them in as facets. This is a useful app if you are presenting to the staff and want to select or de-select several players to show how they compare to each other.

3) Version 3: Same as Version 2 but add Power to the Plot on a second y-axis

4) Version 4: Combine all plot details

Version 4 combines everything from this tutorial into one single web app. You can select the force-velocity-power profile for individual athletes (added to the plot as facets) and see the team average trend line (added to the plot as a dashed line) for velocity and power, allowing you to compare each player to the others and to the group average.

Conclusion

{tidyverse} makes it incredibly easy to manipulate data and quickly iterate plots to our liking while {shiny} offers an easy way for us to turn our plots into an interactive webpage.

Again, for the code and data see my GitHub page (R Script and Data).

# R Tips & Tricks: Pearson Correlation from First Principles

In our most recent TidyX screencast we talked a bit about correlation, which compelled me to expand on Pearson’s correlation and show how you can do things from first principles in R by writing your own custom functions. This tutorial covers:

• A brief explanation of Pearson’s correlation
• R functions available to obtain Pearson’s correlation coefficient
• How to write your own custom functions to understand the underlying principles of Pearson’s correlation coefficient and its confidence intervals

Pearson’s Correlation

Read almost any scientific journal article and you are sure to encounter Pearson’s correlation coefficient (denoted as r), as it is a commonly used metric for describing relationships within one’s data. Take any basic stats course and you are bound to hear the phrase, “Correlation does not equal causation”: just because two things are correlated does not mean that one definitely caused the other (such a relationship would need to be teased out more carefully, and the natural world is full of all sorts of spurious correlations).

Pearson’s correlation is a descriptive statistic used to quantify the linear relationship between two variables. It does so by measuring how much two variables covary with each other. The correlation coefficient is scaled between -1 and 1, where 1 means that the two variables have a perfect positive relationship (as one variable increases the other variable also always increases) while -1 means that the two variables have a perfect negative relationship (as one variable increases the other variable always decreases). A correlation coefficient of 0 suggests that the two variables do not share a linear relationship. It is rare that we see completely perfect correlations (either 1 or -1), and often our data will present with some scatter suggesting a specific trend (positive or negative) but with some amount of variability.

Data Preparation

For this tutorial, we will work with the Lahman baseball data set, which is freely available in R once you have installed the {Lahman} package. We will also use the {tidyverse} package for data manipulation and visualization.

We will use the Batting data set, which contains historic batting statistics for players, and the Master data set, which contains metadata on the players (e.g., birth year, death year, hometown, Major League debut, etc.).

I do a little bit of data cleaning to obtain only the batting statistics from the 2016 season for players with at least 200 at-bats, and I join that data with the player’s birth year from the Master data set. The two hitting variables I’ll use for this tutorial are Hits and RBI. One final thing I do is create a quantile bin for the player’s age so that I can look at correlation across age groups later in the tutorial.
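A rough sketch of those cleaning steps (binning the ages into quartiles with ntile() is my assumption for how the age bins were created):

```
library(tidyverse)
library(Lahman)

yr2016 <- Batting %>%
  filter(yearID == 2016, AB >= 200) %>%
  left_join(Master %>% select(playerID, birthYear), by = "playerID") %>%
  mutate(Age = yearID - birthYear,   # approximate age during the season
         AgeBin = ntile(Age, 4))     # quantile (quartile) bins for age
```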

Here is what the data looks like so far:

Data Visualization

Once we have pre-processed the data we can visualize it to see how things look. We will plot two visuals: (1) the number of players per age group; and (2) the linear relationship between Hits and RBI.

Correlation in R

R offers a few convenient functions for obtaining the correlation coefficient between two variables. The two we will use are cor() and cor.test(). The only arguments you need to pass to these two functions are the X and Y variables you are interested in. The first function produces a single correlation coefficient, while the latter produces the correlation coefficient along with confidence intervals and information about the hypothesis test. Here is what their respective outputs look like for the correlation between Hits and RBI:
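Calling them looks like this, using the yr2016 data frame we created above:

```
cor(yr2016$H, yr2016$RBI)

cor.test(yr2016$H, yr2016$RBI)
```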

Pretty easy! We can see that the correlation coefficient is the same between both functions, as it should be (r = 0.82). In the bottom output we can see that the 95% Confidence Intervals range from r = 0.78 to r = 0.85. The correlation is high between these two variables but it is not perfect, which we can better appreciate from the scatter plot above showing variability around the regression line.

Correlation from First Principles

One of the best ways to understand what is going on behind the built-in functions in your stats program (no matter if it is R, Python, SPSS, SAS, or even Excel) is to try to build your own functions by hand. Doing so gives you an appreciation for some of the inner workings of these statistics and will help you better understand more complex statistics later on.

I’ll build two functions:

1. A function to calculate Pearson’s correlation coefficient
2. A function to calculate the confidence intervals around the correlation coefficient

Pearson’s Correlation Coefficient Function

• Similar to the built in R functions, this function will take inputs of an X and Y variable.
• You can see the math for calculating the correlation coefficient in the function below. We start by subtracting the mean for each column from each observation. We then multiply the differences for each row and then produce a column of squared differences for each variable. Those values provide the inputs for the correlation coefficient in the second to last line of the function.
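In other words, the function computes:

```
r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}
```
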
```
cor_function <- function(x, y){

  dat <- data.frame(x, y)

  dat <- dat %>%
    mutate(diff_x = x - mean(x),   # deviation of each x from its mean
           diff_y = y - mean(y),   # deviation of each y from its mean
           xy = diff_x * diff_y,   # cross product of the deviations
           x2 = diff_x^2,          # squared deviations
           y2 = diff_y^2)

  r <- sum(dat$xy) / sqrt(sum(dat$x2) * sum(dat$y2))
  return(r)
}
```

Confidence Interval for Pearson’s R

• This function takes three inputs: (1) the correlation coefficient between two variables (calculated above); (2) the sample size (the number of observations between X and Y); and (3) the confidence level of interest.
• NOTE: I only set the confidence level up for three options, 0.9, 0.95, or 0.99, for the 90%, 95%, and 99% Confidence Intervals, respectively. I could set it up to take any level of confidence and calculate the appropriate critical value, but the function (as you can see) was already getting long and messy, so I decided to keep it simple for illustration purposes here.
• This function is a bit more involved than the previous one because it requires some transformations of the data. We have to transform the correlation coefficient using Fisher’s Z-transformation in order to work on a scale that is approximately normally distributed. From there we can calculate our confidence intervals and then back-transform the values so that they are on the scale of r.
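The three pieces of that calculation, as they appear in the function below, are:

```
z = \frac{1}{2}\ln\left(\frac{1+r}{1-r}\right), \qquad SE_z = \frac{1}{\sqrt{N-3}}, \qquad r = \frac{e^{2z}-1}{e^{2z}+1}
```

where the last expression back-transforms the Z-scale confidence limits to the r scale.
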
```
cor.CI <- function(r, N, CI){

  ## Fisher's Z-transformation of r and its standard error
  fisher.Z <- .5*log((1+r)/(1-r))
  se.Z <- sqrt(1/(N-3))

  ## critical value for the requested confidence level
  if(CI == .90){
    MOE <- 1.65*se.Z
  } else if(CI == .95){
    MOE <- 1.96*se.Z
  } else if(CI == .99){
    MOE <- 2.58*se.Z
  } else {
    MOE <- NA
  }

  ## confidence limits on the Z scale
  Lower.Z <- fisher.Z - MOE
  Upper.Z <- fisher.Z + MOE

  ## back-transform the limits to the r scale
  Lower.cor.CI <- (exp(2*Lower.Z)-1)/(exp(2*Lower.Z)+1)
  Upper.cor.CI <- (exp(2*Upper.Z)-1)/(exp(2*Upper.Z)+1)

  Correlation.Coefficient.CI <- round(data.frame(r, Lower.cor.CI, Upper.cor.CI), 3)
  return(Correlation.Coefficient.CI)
}
```

Seeing our Functions in Action
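A minimal sketch of calling both functions on the Hits and RBI data:

```
cor_function(yr2016$H, yr2016$RBI)

cor.CI(r = cor_function(yr2016$H, yr2016$RBI),
       N = nrow(yr2016),
       CI = 0.95)
```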

We’ve obtained similar results to what was produced by the built-in R functions!

We can also look at correlation across age bins. To do this, we will use {tidyverse} so that we can group_by() the age bins that we predefined.

First, with the built-in R functions

• Using the built-in cor.test() function, we need to pull the confidence interval out of the output specifically; indexing with [1] and [2] tells R that we want the first value (the lower confidence limit) and the second value (the upper confidence limit), respectively.
```
yr2016 %>%
  group_by(AgeBin) %>%
  summarize(COR = cor(H, RBI),
            COR_Low_CI = cor.test(H, RBI)$conf.int[1],
            COR_High_CI = cor.test(H, RBI)$conf.int[2])
```

Now with our custom R functions

• Similar to above, we need to extract the confidence intervals from our cor.CI() function’s output. If you recall, that function returns three values (the correlation coefficient, the lower CI, and the upper CI). As such, we index [2] and [3] for the lower and upper confidence intervals, respectively, since that is where they sit in the function’s output.
```
yr2016 %>%
  group_by(AgeBin) %>%
  summarize(COR = cor_function(H, RBI),
            cor.CI(r = COR,
                   N = n(),
                   CI = 0.95)[2],
            cor.CI(r = COR,
                   N = n(),
                   CI = 0.95)[3])
```

Conclusion

Hopefully this tutorial was useful in helping you to understand Pearson’s correlation and how easy it is to write functions in R that allow us to explore our data from first principles. All of the code for this tutorial is available on my GitHub page.

# R Tips & Tricks: Summarizing & Visualizing Data

Summarizing and visualizing your data is a critical first step of analysis. For new PhD students and those new to R, I’ve put together a few of the common approaches one might take when dealing with data. This tutorial covers:

• Manipulating/reshaping a data frame (from wide format to long format).
• Basic functions for obtaining summary statistics on your data.
• Visualization strategies such as boxplots, histograms, violin plots, joint plots, and plots showing inter-individual differences.

Instead of putting all of the code into this blog post I’ll just highlight some of the key pieces along the way. If you want all of the code, simply head over to my GitHub page.

Data

In this tutorial, we will simulate data where twenty participants are randomized to either a traditional (i.e., linear) or block periodization program for 16 weeks. The participants had their squat tested pre and post training program intervention, and the difference between the two tests is our outcome of interest. (NOTE: I drew random values for pre- and post-squat for both groups in this example, so it isn’t completely lifelike, where the post-test would normally be related in some way to the pre-test. In future simulations that discuss analysis I will use more realistic outcomes.)

Here is a snip of what the simulation code and first few rows of the data look like:
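The actual simulation code is on GitHub; below is a minimal sketch of the idea. The seed, means, standard deviations, and the participant column name are placeholders of mine, not the values used in the real script:

```
library(tidyverse)

set.seed(1)  # placeholder seed, for reproducibility
dat <- tibble(
  participant = 1:20,
  group = rep(c("traditional", "block"), each = 10),
  pre_squat = round(rnorm(n = 20, mean = 140, sd = 15), 0),
  post_squat = round(rnorm(n = 20, mean = 150, sd = 15), 0))

head(dat)
```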

Packages

This analysis will use three R packages (make sure you have installed them prior to running the code):

1. {tidyverse} – Used for data manipulation and visualization
2. {gridExtra} – Used for organizing the plot grid when creating our joint plot
3. {psych} – Used to produce simple summary statistics

Manipulating Data

Occasionally, you may need to manipulate your data from a wide format to a long format, or vice versa, as some types of analysis or visualization are easier when the data is in a specific format.

In our case, we have the data in a wide format: one row per participant, where the 2 columns of interest are the pre- and post-squat columns (see the above example of the first 6 rows). Pivoting these into a long format can be accomplished with the pivot_longer() function in {tidyverse}. This function takes four primary arguments:

1. The data frame where our wide format data is.
2. The columns of interest from our wide data frame that we want to pivot into a single column. In this case, the “pre_squat” and “post_squat” columns are the ones we want to pivot.
3. The names_to argument, where we specify the name of the new column that the pre_squat and post_squat column names will be pivoted into.
4. The values_to argument, where we specify the name of the column that will hold each participant’s pre_squat and post_squat values in the long data frame.

The code looks like this:

```
dat_long <- pivot_longer(data = dat,
                         cols = c("pre_squat", "post_squat"),
                         names_to = "test",
                         values_to = "performance")
```

The first few rows of the long format data frame look like this:

Notice that now each participant has two rows of data, one for their pre-squat and one for their post-squat.

If we have stored our data in a long format and we need to go back to a wide format, we can simply use pivot_wider() from {tidyverse}. This function takes three primary arguments:

1. The data frame where our long format data is.
2. The names_from argument, which specifies the column in the long data frame that has the variables we’d like to pivot out into their own columns in a wide format (in this case, the “test” column contains the information about whether the test performed was pre or post, so we want to pivot that out to two new columns).
3. The values_from argument specifies which values we want to fill under the corresponding columns we are pivoting. In this case, the ‘performance’ column has the data specific to the pre- and post-squat variables that we are pivoting to their own columns.

The code looks like this:

```
dat_wide <- pivot_wider(data = dat_long,
                        names_from = test,
                        values_from = performance)
```

The first few rows of the new wide data frame look like this:

Notice that the data looks exactly like our initial data frame (as you would expect, given that the original data was already a wide data frame) with one row per participant.

Producing Summary Statistics

Now that we’ve walked through some simple approaches to manipulating data frames, I’ll detail a few easy ways of summarizing your data.

The {psych} package has two very simple functions, describe() and describeBy(), which come in handy when you need summary statistics, info about the range of the data, the standard error of the data, and some metrics about the distribution of your data.

describe() works with all of the data in a column. All you need to do is specify the data frame and the column of interest, separated by a $ sign. In this case, we will use the long data frame and produce summary statistics for the pre- and post-squat performance across all subjects.

```
describe(dat_long$performance)
```

describeBy() functions similarly to describe(); however, it allows for a second argument identifying the group that you’d like summary statistics produced for. In this case, we will produce summary statistics for all pre- and post-squat performance data split by group (traditional or block periodization).

```
describeBy(dat_long$performance, dat_long$group)
```

What if we want to look at summary statistics by group but further split by pre- and post-squat performance? In this instance, we can code our own summary statistics using {tidyverse}. {tidyverse} is one of the best packages for data manipulation, data clean-up, and data visualization, as it bundles a host of packages with functions for these tasks.

In the below code I start with the data frame that has the data I want to summarize and I ‘pipe’ together different functions using the %>% operator. This allows me to iterate very quickly on a data set and intuitively build an analysis as I go. I first call the mutate() function (used to add a new column to my already existing data set) and inside of it I re-level my factors (since R automatically stores them in alphabetical order), as I want my results returned with the pre-squat performance first, followed by the post-squat performance. After that step, I group_by() both my periodization group (traditional and block) and my test (pre and post). Finally, I use the summarize() function to create a summary data frame, specifying N = n() to count the sample size in each group of my group_by(), as well as the mean and the standard deviation for each of the group_by() groups.

```
dat_long %>%
  mutate(test = fct_relevel(test, "pre_squat", "post_squat")) %>%
  group_by(group, test) %>%
  summarize(N = n(),
            Mean = mean(performance),
            SD = sd(performance))
```

NOTE: In my R script on GitHub I also create a “difference” column (post – pre) and perform the same descriptive operations on that column as above. We will use this column below to produce quantiles, as well as for our data visualization, but feel free to work through the examples in the R code using the above functions.

Producing quantiles is easy with the quantile() function from base R. Similar to describe() above, pass the function the data frame and the column of interest, separated by a $ sign, to get your result. Here we will get the quantiles of the differences between post- and pre-squat strength for all participants.

```
quantile(dat$Diff)
```

If we’d like to see these differences by group, we can use the by() function, which allows us to apply a function (in this case quantile()) across a data frame split by a grouping variable. As such, we will get the quantiles for the difference in performance by group.

```
by(dat$Diff, dat$group, quantile)
```

Note: In the R script on my GitHub page I walk through how to create a sequence of numbers (from 0 to 1) to specify a broader range of quantiles to be returned, should you need them.

Data Visualization

Below are a few ways that we can visualize the between-group differences (the code for these is located on my GitHub page). All of the coding was done using {tidyverse}.

Boxplots

Boxplots are useful for showing the quantiles of your data but can lack context. The box represents the interquartile range (25th to 75th percentile) and the black line within the box represents the median value. All of these values, as well as the smallest and largest values, were obtained with the quantile() function in the previous section. We can see that the median difference in squat performance for the block periodization group is higher than that of the traditional periodization group.
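A minimal sketch of the boxplot, assuming the Diff (post – pre) column from the GitHub script has been created:

```
## difference column, as created in the GitHub script
dat$Diff <- dat$post_squat - dat$pre_squat

ggplot(dat, aes(x = group, y = Diff)) +
  geom_boxplot()
```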

Overlapping Histograms

Overlapping histograms are useful for showing distributions and the difference between two groups. The dotted red line is set at a difference of 0 to represent no change from pre- to post-squat test performance. The alpha argument is set below 1 inside the geom_histogram() function to make the histograms semi-transparent, so we can see both distributions clearly. Similar to the boxplots, we can see that the differences in squat performance for the block periodization group sit higher than those of the traditional periodization group.
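A sketch of the overlapping histograms:

```
ggplot(dat, aes(x = Diff, fill = group)) +
  geom_histogram(alpha = 0.5, position = "identity", bins = 10) +  # alpha < 1 keeps both groups visible
  geom_vline(xintercept = 0, color = "red", linetype = "dotted")   # no-change reference line
```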

Violin Plots with Points

Violin plots are a nice balance between boxplots and histograms, as they are essentially two histograms mirroring each other. As such, you get an appreciation of the data distribution, as you would with a histogram, while visualizing the groups side-by-side, as you would with a boxplot. I kept all of the data points on the plot as well, to allow the reader to see each participant’s performance, and I placed a thick black point in the middle of each violin to represent the group median.
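A sketch of the violin plot with the individual points and the group medians:

```
ggplot(dat, aes(x = group, y = Diff)) +
  geom_violin() +
  geom_jitter(width = 0.05, alpha = 0.8) +              # each participant's difference
  stat_summary(fun = median, geom = "point", size = 4)  # thick point at the group median
```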

Joint Plot

Joint plots are a nice way to visualize continuous data where you have an ‘x’ and ‘y’ variable and want to additionally reflect the distribution of each variable with a histogram in the margins. In this plot, I placed the pre-squat performance on the x-axis and the post-squat performance on the y-axis and colored the points by which group the participant was in. This plot was constructed in {tidyverse} but requires the {gridExtra} package to arrange the histograms for the x and y variables in the margin. Additionally, you’ll want to create an empty plot to take up space on the grid so that everything lines up. This is all detailed in the code.
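A rough sketch of that layout (the bin counts and grid proportions are my own choices):

```
library(gridExtra)

scatter <- ggplot(dat, aes(x = pre_squat, y = post_squat, color = group)) +
  geom_point(size = 3)

hist_x <- ggplot(dat, aes(x = pre_squat)) + geom_histogram(bins = 10)
hist_y <- ggplot(dat, aes(x = post_squat)) + geom_histogram(bins = 10) + coord_flip()
empty  <- ggplot() + theme_void()  # blank placeholder so the grid lines up

grid.arrange(hist_x, empty, scatter, hist_y,
             ncol = 2, widths = c(4, 1), heights = c(1, 4))
```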

Inter-individual Differences

Finally, as we can see from all of the above plots, while the average performance in the block periodization group was greater than that of the traditional periodization group, there are wide distributions in both groups, indicating that some participants improved, others did not, and some stayed about the same. We can visualize these inter-individual responses by creating a visualization that exposes each participant’s performance on both tests.
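One way to sketch this, assuming the long data frame carries a participant identifier column:

```
dat_long %>%
  mutate(test = fct_relevel(test, "pre_squat", "post_squat")) %>%
  ggplot(aes(x = test, y = performance, group = participant)) +
  geom_line(alpha = 0.5) +   # one line per participant, pre to post
  geom_point() +
  facet_wrap(~ group)
```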

Conclusion

Hopefully this post serves as a jumping-off point for those looking to get started with analyzing data in R. As I stated at the start, all of the code and data can be obtained at my GitHub page.

# R Tips & Tricks: Dealing With Missing Values

A colleague contacted me last week asking about how they could fill in missing values in their data frame with 0’s. Let’s face it, missing values are an unfortunate reality when working with data and they can be a real pain in the butt. There have been a number of papers that discuss ways of handling missing values (e.g., various imputation approaches) and it is not my intention to address those here, as the approach you choose would depend on your data and what you are trying to do.

My goal with this installment of R Tips & Tricks is to provide you with a way to remove NA’s, NaN’s, and Inf’s from your data set or to replace them with some specified value (e.g., the column median or 0). The latter approach can also be used if you wanted to impute values, as suggested above (e.g., use a regression equation to forecast what the value would be given other variables in your data set).

Create Some Data

First, we need to load the {tidyverse} package and create some data.

```
## Load tidyverse
library(tidyverse)

## Create fake data
df <- tibble(
  var1 = c(4, 2, Inf, 3, NA, 6, NA, NaN, 44, 23, 78),
  var2 = c(7.8, NaN, 70, 1, 8, -Inf, NA, 99, 12, 3, 2.2))
```

As we can see, our data has a bunch of NA, NaN, Inf, and -Inf values in it. Creating the data as a tibble allows us to quickly spot these values, as they print in red:

How many missing values and Inf’s are we dealing with?

Before removing rows with NA’s and Inf’s or trying to impute values, it is best to see how many of these values are in each column. Below is some code to quickly evaluate this, but if you have a larger data set and want other summary measures for each column you may want to check out one of my older posts on how to build a data dictionary.

```
## Count the NA, NaN, and Inf values in each column
## (note: is.na() is also TRUE for NaN, so the NA row includes the NaN values)
data.frame(NA_cols = sapply(df, function(x) sum(is.na(x))),
           NaN_cols = sapply(df, function(x) sum(is.nan(x))),
           Inf_cols = sapply(df, function(x) sum(is.infinite(x)))) %>%
  t()
```

The above code produces a table of counts of NA, NaN, and Inf’s in both of our columns (var1, var2).

Removing the rows with NA, NaN, and Inf

Sometimes, you might just want to remove the rows with missing values or Inf.

NOTE: If you are going to do this it is critical that you have a good reason to do so and you know exactly why you are doing it and how it might change any outcomes generated from your analysis!

There are two easy ways to do this in {tidyverse}. One way is to filter out these values column-by-column, indicating which columns you want to check and which rows you’d like to remove.

```
## Filter on each column explicitly
df %>%
  filter(!is.na(var1),
         !is.na(var2),
         !is.infinite(var1),
         !is.infinite(var2))
```

The other option (useful if you have a data set with a large number of columns) is to apply the same logic across all numeric columns at once.

```
## All numeric columns at once
df %>%
  filter_if(is.numeric, all_vars(!is.na(.))) %>%
  filter_if(is.numeric, all_vars(!is.infinite(.)))
```

Both approaches produce the same result:

Replace NA, NaN, and Inf with a specific value

Finally, we can replace the NA, NaN, or Inf values with a specific value. In the below code, I replace NA and NaN with the median of the respective column and I replace Inf and -Inf with 0. If you had an equation for imputing values, this is also how you could apply it.

```
## Convert NA and NaN to the column median and Inf/-Inf to 0 in both columns
## (na.rm = TRUE means NA and NaN are ignored when computing the median)
df <- df %>%
  mutate(var1_new = ifelse(is.na(var1), median(var1, na.rm = TRUE),
                           ifelse(is.infinite(var1), 0, var1)),
         var2_new = ifelse(is.na(var2), median(var2, na.rm = TRUE),
                           ifelse(is.infinite(var2), 0, var2)))
```

After running that code we can see in the two new columns where the offending values were replaced with the specified values.

Missing values can cause all sorts of issues in data clean-up and analysis. Think hard before you decide to remove them, and about the approach you might take to impute values. In this case, we used median imputation for NA and NaN, but there are a number of other approaches that are more rigorous and will offer more reliable results.

The code for this blog is available on my GitHub page.

# R Tips & Tricks: Setting up your data to compare yesterday to today

A common question sports scientists have is, “How did something that happened yesterday affect today?” For example, the sport scientist might be interested in how yesterday’s training load influences the next day’s level of subjective soreness. In this case, the data is usually offset by a day, as we will see in the example.

Rather than wasting time each day copying and pasting values in Excel (and potentially making a mistake), we can use the lag() function from {dplyr} (loaded with {tidyverse}) to manipulate our data into the format we need for analysis.

In today’s R Tips & Tricks blog post I’ll walk through three different approaches to doing this, each a little more complex than the last.

First, let’s load the packages we will need to manipulate our data set.

```
## Load packages
library(tidyverse)
library(lubridate)
```

Example 1: Simple Example

First, we simulate some fake data:

```
day <- 1:10
trainingLoad <- round(rnorm(n = length(day), mean = 460, sd = 60))
soreness <- round(runif(n = length(day), min = 3, max = 6), 0)
df <- data.frame(day, trainingLoad, soreness)
df
```

We have 10 days recorded and we want to evaluate the training load from the previous day with the soreness of the next day. For example, the training load on day 1 needs to be compared to the level of soreness the next morning, on day 2. As such, the values of interest are offset by one day.

To solve this issue we can use the lag() function on training load. This shifts each value down one row, so the previous day’s training load sits on the same row as the next day’s soreness.

```
df %>%
  mutate(trainingLoad_lag = lag(trainingLoad))
```

Notice that now the 416 training load units on day 1 are on the same row as the soreness on day 2 in our new column, trainingLoad_lag.

Example 2: Working Across Weeks

The above example is rather simple and assumes that all training and soreness reporting take place on consecutive days. Unfortunately, in real life we are often dealing with training across multiple weeks where there may be days off between training sessions.

For example, the data might look like this:

```
date <- c(seq(as.Date("2020/01/05"), as.Date("2020/01/08"), by = "days"),
          seq(as.Date("2020/01/12"), as.Date("2020/01/15"), by = "days"))
trainingLoad <- round(rnorm(n = length(date), mean = 460, sd = 60), 0)
soreness <- round(runif(n = length(date), min = 3, max = 6), 0)
df <- data.frame(date, trainingLoad, soreness)
df
```

Let’s look at what happens if we blindly apply the lag() function:

```
df %>%
  mutate(trainingLoad_lag = lag(trainingLoad))
```

Notice the issue here. We have a group of 4 consecutive training sessions that ends on 1/8/2020 and a second group of 4 consecutive sessions starting on 1/12/2020. The lag() function just works down the data set and assumes that these are all consecutive days. If we analyzed the data in this fashion we might come up with strange outcomes, given that the soreness experienced on 1/12/2020 is probably not due to the session that happened 4 days earlier, on 1/8/2020.

We can solve this issue in one of two ways.

Fix #1: Always create a mesocycle variable in your data to represent the weeks. This will allow you to group_by() that variable.

```
mesocycle <- rep(c(1, 2), each = 4)
df <- data.frame(mesocycle, df)
df

# Group by mesocycle
df %>%
  group_by(mesocycle) %>%
  mutate(trainingLoad_lag = lag(trainingLoad))
```

After adding the mesocycle variable and then using it in the group_by() function, we achieve the correct data manipulation: day 1 of each training mesocycle starts with NA in the trainingLoad_lag column, indicating that no session occurred the day prior.

Fix #2: Use the week() function from the lubridate package and have R automatically find the week of the year corresponding to the date of the training session.

```
date <- c(seq(as.Date("2020/01/05"), as.Date("2020/01/08"), by = "days"),
          seq(as.Date("2020/01/11"), as.Date("2020/01/14"), by = "days"))
trainingLoad <- round(rnorm(n = length(date), mean = 460, sd = 60), 0)
soreness <- round(runif(n = length(date), min = 3, max = 6), 0)
df <- data.frame(date, trainingLoad, soreness)
df

df <- df %>%
  mutate(trainingWeek = week(date))

# Group by trainingWeek
df %>%
  group_by(trainingWeek) %>%
  mutate(trainingLoad_lag = lag(trainingLoad))
```

Running all of the above code we find that the week() function identified the week of the year based on the date and then we were able to group_by() the training week variable to come to the same outcome as we did in Fix 1.

Example 3: Working Across Weeks with Multiple Athletes

We can simulate two mesocycles of training for three athletes and then simply add the athlete to our group_by():

```
athlete <- rep(LETTERS[1:3], each = 8)
mesocycle <- rep(rep(c(1, 2), each = 4), times = 3)
date <- rep(c(seq(as.Date("2020/01/05"), as.Date("2020/01/08"), by = "days"),
              seq(as.Date("2020/01/12"), as.Date("2020/01/15"), by = "days")), times = 3)
trainingLoad <- round(rnorm(n = length(date), mean = 460, sd = 60), 0)
soreness <- round(runif(n = length(date), min = 3, max = 6), 0)
df <- data.frame(athlete, mesocycle, date, trainingLoad, soreness)

# Group by athlete and mesocycle
df %>%
  group_by(athlete, mesocycle) %>%
  mutate(trainingLoad_lag = lag(trainingLoad))
```

Grouping by both athlete and mesocycle computes the lag within each athlete’s training block, so the first session of every mesocycle for every athlete correctly receives an NA for the lagged training load.