R Tips & Tricks: Pearson Correlation from First Principles

In our most recent TidyX screen cast we talked a bit about correlation, which compelled me to expand on Pearson’s correlation and show how you can work from first principles in R by writing your own custom functions. This tutorial covers:

  • A brief explanation of Pearson’s correlation
  • R functions available to obtain Pearson’s correlation coefficient
  • How to write your own custom functions to understand the underlying principles of Pearson’s correlation coefficient and its confidence intervals

Pearson’s Correlation

Read almost any scientific journal article and you are sure to encounter Pearson’s correlation coefficient (denoted as r), as it is a commonly used metric for describing relationships within one’s data. Take any basic stats course and you are bound to hear the phrase, “Correlation does not equal causation”, because the fact that two things are correlated does not mean that one caused the other (such a relationship would need to be teased out more carefully, and the natural world is full of all sorts of spurious correlations).

Pearson’s correlation is a descriptive statistic used to quantify the linear relationship between two variables. It does so by measuring how much two variables covary with each other. The correlation coefficient is scaled between -1 and 1, where 1 means that the two variables have a perfect positive relationship (as one variable increases the other variable also always increases) while -1 means that the two variables have a perfect negative relationship (as one variable increases the other variable always decreases). A correlation coefficient of 0 suggests that the two variables do not share a linear relationship. It is rare that we see perfect correlations (either 1 or -1); more often our data will present a trend (positive or negative) with some amount of scatter around it.

Data Preparation

R offers a few convenient functions for obtaining the correlation coefficient between two variables. For this tutorial, we will work with the Lahman baseball data set, which is freely available in R once you have installed the {Lahman} package. We will also use the {tidyverse} package for data manipulation and visualization.

We will use the Batting data set, which contains historic batting statistics for players, and the Master data set, which contains metadata on the players (e.g., birth year, death year, hometown, Major League debut).

I do a little bit of data cleaning to obtain only the batting statistics from the 2016 season for players with at least 200 at-bats, and I join that data with the player’s birth year from the Master data set. The two hitting variables I’ll use for this tutorial are Hits and RBI. One final thing I do is create a quantile bin for the player’s age so that I can look at correlation across age groups later in the tutorial.
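Here is a rough sketch of that preparation step. The {Lahman} column names (playerID, yearID, AB, H, RBI, birthYear) are real, but the yr2016 object name, the AgeBin name, and the quartile binning via ntile() are my assumptions to match the snippets used later; the original script may differ.

library(tidyverse)
library(Lahman)

# Sketch of the data preparation described above (filters and bin
# boundaries are assumptions, not the original script)
yr2016 <- Batting %>%
  filter(yearID == 2016,
         AB >= 200) %>%                            # 2016 season, min 200 at-bats
  left_join(Master %>% select(playerID, birthYear),
            by = "playerID") %>%                   # add birth year from Master
  mutate(Age = yearID - birthYear,                 # approximate age in 2016
         AgeBin = ntile(Age, 4)) %>%               # quantile-based age bins (assumed)
  select(playerID, Age, AgeBin, H, RBI)

head(yr2016)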

Here is what the data looks like so far:

Data Visualization

Once we have pre-processed the data we can visualize it to see how things look. We will plot two visuals: (1) the number of players per age group; and (2) a plot of the linear relationship between Hits and RBI.
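A minimal sketch of those two plots, assuming the yr2016 data frame and AgeBin column from the preparation step above:

# players per age group
yr2016 %>%
  count(AgeBin) %>%
  ggplot(aes(x = as.factor(AgeBin), y = n)) +
  geom_col() +
  labs(x = "Age Bin", y = "Number of Players")

# linear relationship between Hits and RBI
yr2016 %>%
  ggplot(aes(x = H, y = RBI)) +
  geom_point() +
  geom_smooth(method = "lm") +
  labs(x = "Hits", y = "RBI")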


Correlation in R

R offers a few convenient functions for obtaining the correlation coefficient between two variables. The two we will use are cor() and cor.test(). The only arguments you need to pass to these two functions are the X and Y variables you are interested in. The first function will produce a single correlation coefficient while the latter will produce the correlation coefficient along with confidence intervals and information about the hypothesis test. Here is what their respective outputs look like for the correlation between Hits and RBI:
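The calls themselves are one-liners (assuming the yr2016 data frame built above):

# single correlation coefficient
cor(yr2016$H, yr2016$RBI)

# correlation coefficient plus confidence interval and hypothesis test
cor.test(yr2016$H, yr2016$RBI)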

Pretty easy! We can see that the correlation coefficient is the same from both functions, as it should be (r = 0.82). In the cor.test() output we can see that the 95% confidence interval ranges from r = 0.78 to r = 0.85. The correlation is high between these two variables but it is not perfect, which we can better appreciate from the scatter plot above showing variability around the regression line.

Correlation from First Principles

One of the best ways to understand what is going on behind the built-in functions in your stats program (no matter if it is R, Python, SPSS, SAS, or even Excel) is to try and build your own functions by hand. Doing so gives you an appreciation for some of the inner workings of these statistics and will help you better understand more complex statistics later on.

I’ll build two functions:

  1. A function to calculate Pearson’s correlation coefficient
  2. A function to calculate the confidence intervals around the correlation coefficient

Pearson’s Correlation Coefficient Function

  • Similar to the built-in R functions, this function takes an X and a Y variable as inputs.
  • You can see the math for calculating the correlation coefficient in the function below. We start by subtracting the mean for each column from each observation. We then multiply the differences for each row and then produce a column of squared differences for each variable. Those values provide the inputs for the correlation coefficient in the second to last line of the function.
cor_function <- function(x, y){
  
  dat <- data.frame(x, y)
  dat <- dat %>%
    mutate(diff_x = x - mean(x),   # deviation of each x from the mean of x
           diff_y = y - mean(y),   # deviation of each y from the mean of y
           xy = diff_x * diff_y,   # cross-product of the deviations
           x2 = diff_x^2,          # squared deviations of x
           y2 = diff_y^2)          # squared deviations of y
  
  # sum of cross-products divided by the square root of the product of the sums of squares
  r <- sum(dat$xy) / sqrt(sum(dat$x2) * sum(dat$y2))
  return(r)
}

Confidence Interval for Pearson’s R

  • This function takes three inputs: (1) the correlation coefficient between two variables (calculated above); (2) the sample size (the number of paired observations of X and Y); and (3) the confidence level of interest.
  • NOTE: I only set the confidence level up for three options, 0.9, 0.95, or 0.99, for the 90%, 95%, and 99% confidence interval, respectively. I could set it up to take any level of confidence and calculate the appropriate critical value, but the function (as you can see) was already getting long, so I decided to cut it off and keep it simple for illustration purposes here.
  • This function is a bit more involved than the previous one because it requires a transformation of the data. We have to transform the correlation coefficient using Fisher’s z-transformation so that its sampling distribution is approximately normal. From there we can calculate our confidence interval and then back-transform the values so that they are on the scale of r.
cor.CI <- function(r, N, CI){
  
  # Fisher's z-transformation of r
  fisher.Z <- 0.5 * log((1 + r) / (1 - r))
  
  # standard error of z
  se.Z <- sqrt(1 / (N - 3))
  
  # critical value for the requested confidence level
  # (only 90%, 95%, and 99% are supported, as noted above)
  if(CI == 0.90){
    crit <- 1.645
  } else if(CI == 0.95){
    crit <- 1.96
  } else if(CI == 0.99){
    crit <- 2.576
  } else {
    stop("CI must be one of 0.9, 0.95, or 0.99")
  }
  
  # margin of error on the z scale
  MOE <- crit * se.Z
  
  Lower.Z <- fisher.Z - MOE
  Upper.Z <- fisher.Z + MOE
  
  # back-transform the limits to the scale of r
  Lower.cor.CI <- (exp(2*Lower.Z) - 1) / (exp(2*Lower.Z) + 1)
  Upper.cor.CI <- (exp(2*Upper.Z) - 1) / (exp(2*Upper.Z) + 1)
  
  Correlation.Coefficient.CI <- data.frame(r, Lower.cor.CI, Upper.cor.CI)
  Correlation.Coefficient.CI <- round(Correlation.Coefficient.CI, 3)
  return(Correlation.Coefficient.CI)
}


Seeing our Functions in Action
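A quick sketch of applying the two custom functions to the same Hits and RBI data (again assuming the yr2016 data frame from earlier):

# correlation coefficient from the custom function
r <- cor_function(x = yr2016$H, y = yr2016$RBI)
r

# 95% confidence interval from the custom function
cor.CI(r = r, N = nrow(yr2016), CI = 0.95)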

We’ve obtained similar results to what was produced by the built-in R functions!

We can also look at correlation across age bins. To do this, we will use {tidyverse} so that we can group_by() the age bins that we predefined.

First with the built-in R functions

  • Using the built-in cor.test() function, we need to extract the confidence interval from the output; specifying [1] and [2] tells R that we want the first value (lower confidence interval) and the second value (upper confidence interval), respectively.
yr2016 %>%
  group_by(AgeBin) %>%
  summarize(COR = cor(H, RBI),
            COR_Low_CI = cor.test(H, RBI)$conf.int[1],
            COR_High_CI = cor.test(H, RBI)$conf.int[2])

Now with our custom R functions

  • Similar to above, we need to extract the confidence interval out of the confidence interval function’s output. In this case, if you recall, there were three outputs (correlation coefficient, Low CI, and High CI). As such, we want to indicate [2] and [3] for the lower and upper confidence interval, respectively, since that is where they are located in the function’s output.
yr2016 %>%
  group_by(AgeBin) %>%
  summarize(COR = cor_function(H, RBI),
            cor.CI(r = COR,
                   N = n(),
                   CI = 0.95)[2],
            cor.CI(r = COR,
                   N = n(),
                   CI = 0.95)[3])

 

Conclusion

Hopefully this tutorial was useful in helping you to understand Pearson’s correlation and how easy it is to write functions in R that allow us to explore our data from first principles. All of the code for this tutorial is available on my GitHub page.

 

TidyX Episode 17: Regression, KMeans Clustering, & PCA

This week, Ellis Hughes and I explain the code of Rebecca Stevick, who shows us how to plot a linear regression model with the regression line, regression equation, and correlation coefficient all conveniently displayed on the plot. The plot was created using data on The Uncanny X-Men comic books, supplied by the TidyTuesday Project.

Following Rebecca’s code, we delve into other ways of looking at the regression equation and discuss using Ellis’ R package, {colortable}, to produce conditionally formatted tables for model outputs. We then move on to using the X-Men data to build and visualize a KMeans cluster analysis and a PCA.

The episode is a little longer than usual (50 minutes) but combines a number of different thoughts around coding and visualizing statistical models in R.

R Tips & Tricks: Summarizing & Visualizing Data

Summarizing and visualizing your data is a critical first step of analysis. For new PhD students and those new to R, I’ve put together a few of the common approaches one might take when dealing with data. This tutorial covers:

  • Manipulating/reshaping a data frame (from wide format to long format).
  • Basic functions for obtaining summary statistics on your data.
  • Visualization strategies such as boxplots, histograms, violin plots, joint plots, and plots showing inter-individual differences.

Instead of putting all of the code into this blog post I’ll just highlight some of the key pieces along the way. If you want all of the code, simply head over to my GitHub page.

Data

In this tutorial, we will simulate data where twenty participants are randomized to either a traditional (i.e., linear) or block periodization program for 16 weeks. The participants had their squat tested pre and post training program intervention, and the difference between the two tests is our outcome of interest. (NOTE: I drew random values for pre- and post-squat for both groups in this example, so it isn’t completely lifelike, since the post-test would normally be related in some way to the pre-test. In future simulations that discuss analysis I will use more realistic outcomes.)

Here is a snip of what the simulation code and first few rows of the data look like:
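Below is a rough sketch of how such a simulation could be written; the seed, means, and standard deviations are my assumptions rather than the values used in the original script, and the column names (participant, group, pre_squat, post_squat, Diff) are chosen to match the snippets that follow.

library(tidyverse)

set.seed(2020)                                         # any seed, for reproducibility

dat <- tibble(
  participant = 1:20,
  group = rep(c("traditional", "block"), each = 10),   # group assignment (assumed)
  pre_squat = round(rnorm(n = 20, mean = 130, sd = 15), 0),
  post_squat = round(rnorm(n = 20, mean = 140, sd = 15), 0)
) %>%
  mutate(Diff = post_squat - pre_squat)                # difference column used later on

head(dat)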

Packages

This analysis will use three R packages (make sure you have installed them prior to running the code):

  1. {tidyverse} – Used for data manipulation and visualization
  2. {gridExtra} – Used for organizing the plot grid when creating our joint plot
  3. {psych} – Used to produce simple summary statistics

Manipulating Data

Occasionally, you may need to manipulate your data from a wide format to a long format, or vice versa, as some types of analysis or visualization are made easier when the data is in a specific format.

In our case, we have the data in a wide format: one row per participant, with the two columns of interest being the pre- and post-squat columns (see the example of the first 6 rows above). Reshaping this into a long format can be accomplished with the pivot_longer() function in {tidyverse}. This function takes four primary arguments:

  1. The data frame where our wide format data is.
  2. The columns of interest from our wide data frame that we want to pivot into a single column. In this case, “pre_squat” and “post_squat” columns are the ones we want to pivot.
  3. The names_to argument is where we specify the name of the column that the pre_squat and post_squat column names will be pivoted into.
  4. The values_to argument is where we specify the name of the column that the participants’ pre_squat and post_squat values will reside under in the long data frame.

The code looks like this:

dat_long <- pivot_longer(data = dat, 
                         cols = c("pre_squat", "post_squat"),
                         names_to = "test",
                         values_to = "performance")

The first few rows of the long format data frame look like this:

Notice that now each participant has two rows of data, one for their pre-squat and one for their post-squat.

If we have stored our data in a long format and we need to go back to a wide format, we can simply use pivot_wider() from {tidyverse}. This function takes three primary arguments:

  1. The data frame where our long format data is.
  2. The names_from argument, which specifies the column in the long data frame that has the variables we’d like to pivot out into their own columns in a wide format (in this case, the “test” column contains the information about whether the test performed was pre or post, so we want to pivot that out to two new columns).
  3. The values_from argument specifies which values we want to fill under the corresponding columns we are pivoting. In this case, the ‘performance’ column has the data specific to the pre- and post-squat variables that we are pivoting to their own columns.

The code looks like this:

dat_wide <- pivot_wider(data = dat_long,
                        names_from = test,
                        values_from = performance)

The first few rows of the new wide data frame looks like this:

Notice that the data looks exactly like our initial data frame (as you would expect given that original data was already a wide data frame) with one row per participant.

Producing Summary Statistics

Now that we’ve walked through some simple approaches to manipulating data frames, I’ll detail a few easy ways of summarizing your data.

The {psych} package has two very simple functions, describe() and describeBy(), which come in handy when you need summary statistics, info about the range of the data, the standard error of the data, and some metrics about the distribution of your data.

describe() works with all of the data in a column. All you need to do is specify the data frame and the column of interest (separating the two with a $ sign). In this case, we will use the long data frame and produce summary statistics for the pre- and post-squat performance across all subjects.

describe(dat_long$performance)


describeBy() functions similarly to describe(); however, it allows for a second argument identifying the group that you’d like to produce summary statistics for. In this case, we will produce summary statistics for all pre- and post-squat performance data, broken down by group (traditional or block periodization).

describeBy(dat_long$performance, dat_long$group)

What if we want to look at summary statistics by group but further group by pre- and post-squat performance? In this instance, we can code our own summary statistics using {tidyverse}. {tidyverse} is one of the best packages for data manipulation, data clean up, and data visualization as it contains a host of other packages that contain functions for these tasks.

In the below code I start with the data frame that has the data I want to summarize and I ‘pipe’ together different functions using the %>% command. This allows me to iterate very quickly on a data set and intuitively build an analysis as I go. In this example I first call the mutate() function (used to add a new column to my already existing data set) and inside of it I re-level my factors (since R automatically stores them in alphabetical order), as I want my results returned with the pre-squat performance first followed by the post-squat performance. After that step, I indicate that I want to group_by() both my periodization groups (traditional and block) and my test (pre and post). Finally, I use the summarize() function to create a summary data frame, specifying N = n() to get the sample size in each group_by() group, as well as the mean and the standard deviation for each group.

dat_long %>%
  mutate(test = fct_relevel(test, "pre_squat", "post_squat")) %>%
  group_by(group, test) %>%
  summarize(N = n(),
            Mean = mean(performance),
            SD = sd(performance))

 

NOTE: In my R script on GitHub I also create a “difference” column (post – pre) and perform the same descriptive operations on that column as above. We will use this column below to produce quantiles as well as for our data visualization purposes, but feel free to work through the examples in the R code using the above functions.

Producing quantiles is easy with the quantile() function from base R. Similar to describe() above, pass the function the data and the column of interest, separated by a $ sign, to get your result. Here we will get the quantiles of the differences between post and pre-squat strength for all participants.

quantile(dat$Diff)

If we’d like to see these differences by group, we can use the by() function, which allows us to pass a function (in this case quantile()) to an entire data frame. As such, we will get the quantiles for the difference in performance by group.

by(dat$Diff, dat$group, quantile)


Note: In the R script on my GitHub page I walk through how to create a sequence of numbers (from 0 to 1) and specify a broader range of quantiles to be returned, should you need them.
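The idea is simply to pass a probability sequence to the probs argument of quantile(); a minimal sketch, assuming the Diff column from above:

# deciles of the post - pre squat difference
quantile(dat$Diff, probs = seq(from = 0, to = 1, by = 0.1))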

Data Visualization

Below are a few ways that we can visualize the between group differences (the code for these is located at GitHub). All of the coding was done using {tidyverse}.

Boxplots

Boxplots are useful for showing the quantiles of your data but can lack context about the underlying distribution. The box represents the interquartile range (25th to 75th percentile) and the black line within the box represents the median value. All of these values, as well as the smallest and largest values, were obtained with the quantile() function in the previous section. We can see that the median value of the difference in squat performance for the block periodization group is higher than that of the traditional periodization group.
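A minimal sketch of such a boxplot (assuming the dat data frame with group and Diff columns from the simulation above; the original GitHub code may style it differently):

dat %>%
  ggplot(aes(x = group, y = Diff)) +
  geom_boxplot() +
  labs(x = NULL, y = "Post - Pre Squat Difference")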

Overlapping Histograms

Overlapping histograms are useful for showing distributions and the difference between two groups. The dotted red line is set at a difference of 0 to represent no change from pre- to post-squat test performance. The alpha argument inside the geom_histogram() function is set below 1 to allow some transparency in the histograms. This way, we can see both distributions clearly. Similar to the boxplots, we can see that the median value of the difference in squat performance for the block periodization group is higher than that of the traditional periodization group.
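A sketch of how the overlapping histograms could be built (the bin count and colors are assumptions; the original plot may differ):

dat %>%
  ggplot(aes(x = Diff, fill = group)) +
  geom_histogram(alpha = 0.5, position = "identity", bins = 10) +
  geom_vline(xintercept = 0, color = "red", linetype = "dashed") +  # no-change reference line
  labs(x = "Post - Pre Squat Difference", y = "Count")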

Violin Plots with Points

Violin plots are a nice balance between boxplots and histograms, as they are essentially two histograms mirroring each other. As such, you get an appreciation of the data distribution, as you would with a histogram, while visualizing the groups side-by-side, as you would with a boxplot. I kept all of the data points on the plot as well, to allow the reader to see each participant’s performance, and I placed a thick black point in the middle of the violin to represent the group median.
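A sketch of a violin plot with the individual points and a group median overlaid (assuming the same dat data frame):

dat %>%
  ggplot(aes(x = group, y = Diff)) +
  geom_violin() +
  geom_jitter(width = 0.05, alpha = 0.6) +                 # individual participants
  stat_summary(fun = median, geom = "point", size = 4) +   # group median
  labs(x = NULL, y = "Post - Pre Squat Difference")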

Joint Plot

Joint plots are a nice way to visualize continuous data where you have an ‘x’ and ‘y’ variable and want to additionally reflect the distribution of each variable with a histogram in the margins. In this plot, I placed the pre-squat performance on the x-axis and the post-squat performance on the y-axis and colored the points by which group the participant was in. This plot was constructed in {tidyverse} but requires the {gridExtra} package to arrange the histograms for the x and y variables in the margin. Additionally, you’ll want to create an empty plot to take up space on the grid so that everything lines up. This is all detailed in the code.
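A rough sketch of that layout with {gridExtra}; the exact theming and alignment in the GitHub code will differ, and the plot names here (scatter, hist_x, hist_y, blank) are my own:

library(gridExtra)

# scatter plot of pre vs post squat performance, colored by group
scatter <- ggplot(dat, aes(x = pre_squat, y = post_squat, color = group)) +
  geom_point(size = 3) +
  theme(legend.position = "bottom")

# marginal histogram for the x variable
hist_x <- ggplot(dat, aes(x = pre_squat)) +
  geom_histogram(bins = 10)

# marginal histogram for the y variable, rotated to sit alongside the y-axis
hist_y <- ggplot(dat, aes(x = post_squat)) +
  geom_histogram(bins = 10) +
  coord_flip()

# empty plot to take up space in the corner so everything lines up
blank <- ggplot() + theme_void()

grid.arrange(hist_x, blank, scatter, hist_y,
             ncol = 2, nrow = 2,
             widths = c(4, 1), heights = c(1, 4))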

Inter-individual Differences

Finally, as we can see from all of the above plots, while the average performance in the block periodization group was greater than that of the traditional periodization group, there is a large spread in both groups, indicating that some participants improved, others did not, and some stayed the same. We can visualize these inter-individual responses by creating a visualization that exposes each participant’s performance on both tests.
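One way to sketch this is to connect each participant’s pre and post test with a line, faceted by group (assuming the dat_long data frame built earlier with pivot_longer()):

dat_long %>%
  mutate(test = fct_relevel(test, "pre_squat", "post_squat")) %>%
  ggplot(aes(x = test, y = performance, group = participant)) +
  geom_line(alpha = 0.7) +   # one line per participant
  geom_point() +
  facet_wrap(~group) +
  labs(x = NULL, y = "Squat Performance")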

Conclusion

Hopefully this post serves as a jumping-off point for those looking to get started with analyzing data in R. As I stated at the start, all of the code can be obtained at my GitHub page.

TidyX Episode 16: Web Scraping & NBA Shot Charts

This week, Ellis Hughes and I start by breaking down the code that Jihong Zhang wrote to visualize Caribou Movements in Canada from data provided by the TidyTuesday Project. The data is spatial tracking data, and Jihong plotted it over top of a Google map. Since spatial data is currently very popular in sport, we decided to create our own plots of NBA Shot Charts using three different approaches (scatter plots, hexbins, and heat maps). To obtain this data, we walk through our code for web scraping.

This screen cast covers a number of key topics in data science:

1. Obtaining data via web scraping.
2. Dealing with regular expressions.
3. Visualizing data.
4. Some things to consider when joining tables (NOTE: I did a BLOG ARTICLE a few months ago that details the various JOIN functions in {tidyverse}, so it may be worthwhile to check that out).

To watch the screen cast, CLICK HERE.

To access our code, CLICK HERE.

TidyX Episode 15: Juneteenth & Census Tables

In TidyX Episode 15, Ellis Hughes shows us how to quickly build a report of conditionally formatted data tables using the {colortable} and {knitr} packages. He also shares a quick tip on how to create a table of contents within your {rmarkdown} reports to give your readers an easy way to navigate the document. This week’s data comes from the TidyTuesday Project and uses census data to show America’s history with slavery in different regions across the country.

To watch our screen cast, CLICK HERE.

For our R code, CLICK HERE.