TidyX 77: Intro to tidymodels

Ellis Hughes and I just wrapped up our series on using SQL in R and have decided to move on to doing a series on tidymodels.

For those who don’t know, tidymodels is a framework for building machine learning models in R using tidyverse principles. Up until this point, most of our model building has been done either in the native package for a given model or with the caret package (which tidymodels has now superseded). So, we are super excited to get into the tidymodels framework and learn along with you! Each week we will try to build on a different component of modeling within tidymodels.

This first week is a basic introduction to tidymodels and the broom package (which is automatically loaded with tidymodels). We cover how to set up the model and obtain the model outputs in a nice tidy manner.
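To give a flavor of what this looks like, here is a minimal sketch of the kind of workflow the episode covers, using the built-in mtcars data (the dataset and formula here are just illustrative, not necessarily what we use in the screen cast):

```r
library(tidymodels)  # loads parsnip, broom, and friends

# Specify a linear regression model and fit it with the "lm" engine
fit_lm <- linear_reg() %>%
  set_engine("lm") %>%
  fit(mpg ~ wt + hp, data = mtcars)

# broom returns the model output as tidy data frames
tidy(fit_lm)     # one row per coefficient (estimate, std.error, p.value)
glance(fit_lm)   # one-row model summary (r.squared, AIC, etc.)
```

The nice part is that `tidy()` and `glance()` give you plain data frames, so the model output drops straight into the rest of a tidyverse pipeline.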

To watch our screen cast, CLICK HERE.

To access our code, CLICK HERE.

TidyX 76: Polling databases for a multi-user interactive shiny table

Ellis Hughes and I wrap up our SQL database/shiny series by taking a question from one of our viewers.

In TidyX 75, we built a {shiny} app that allowed the user to update a table and save the results back to the database. One of our viewers asked if we could address the issue of multiple users editing the table simultaneously and, in the process, overwriting each other’s notes in the database. So, we address exactly that in this episode.
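The gist of the fix is to poll the database for changes rather than assuming each user’s local copy is current. Here is a minimal, hypothetical sketch using `shiny::reactivePoll()` — the table and column names are illustrative, not taken from the episode:

```r
library(shiny)
library(DBI)

server <- function(input, output, session) {
  # Illustrative connection; the episode may use a different backend
  con <- dbConnect(RSQLite::SQLite(), "notes.db")
  onStop(function() dbDisconnect(con))

  # Re-read the table only when a cheap "check" query says it changed
  notes <- reactivePoll(
    intervalMillis = 2000,
    session = session,
    # checkFunc: cheap query; when its value changes, valueFunc re-runs
    checkFunc = function() {
      dbGetQuery(con, "SELECT MAX(last_edited) FROM notes")[[1]]
    },
    # valueFunc: the full (more expensive) read of the table
    valueFunc = function() {
      dbGetQuery(con, "SELECT * FROM notes")
    }
  )

  output$notes_table <- renderTable(notes())
}
```

Because every user’s session polls the same source of truth, edits saved by one user show up for the others within a polling interval instead of being silently clobbered.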

To watch the screen cast, CLICK HERE.

To access the code, CLICK HERE.

The Nordic Hamstring Exercise, Hamstring Strains, & the Scientific Process

Doing science in the lab is hard.

Doing science in the applied environment is hard — maybe even harder than the lab, at times, due to all of the variables you are unable to control for.

Reading and understanding science is also hard.

Let’s face it, science is tough! So tough, in fact, that scientists themselves have a difficult time with all of the above, and they do this stuff for a living! As such, to keep things in check, science applies a peer-review process to ensure that a certain level of standard is upheld.

Science has a very adversarial quality to it. One group of researchers formulates a hypothesis, conducts some research, and makes a claim. Another group of researchers looks at that claim and says, “Yeah, but…”, and then goes to work trying to poke holes in it, looking to answer the question from a different angle, or trying to refute it altogether. The process continues until some type of consensus is reached within the scientific community based on all of the available evidence.

This back-and-forth tennis match of science has a lot to teach those looking to improve their ability to read and understand research. Reading methodological papers and letters to the editor offers a glimpse into how other, more seasoned, scientists think and approach a problem. You get to see how they construct an argument, deal with a rebuttal, and discuss the limitations of both the work they are questioning and the work they are conducting themselves.

All of this brings me to a recent publication from Franco Impellizzeri, Alan McCall, and Maarten van Smeden, Why Methods Matter in a Meta-Analysis: A reappraisal showed inconclusive injury prevention effect of Nordic hamstring exercise.

The paper is directed at a prior meta-analysis which aggregated the findings of several studies with the aim of understanding the role of the Nordic hamstring exercise (NHE) in reducing the risk of hamstring strain injuries in athletes. In a nutshell, Impellizzeri and colleagues felt that the conclusions and claims made from the original meta-analysis were too optimistic, as its title suggested that performing the NHE can halve the rate of hamstring injuries (Risk Ratio: 0.49, 95% CI: 0.32 to 0.74). Moreover, Impellizzeri et al. identified some methodological flaws with regard to how the meta-analysis was performed.

The reason I like this paper is that it steps you through the thought process of Impellizzeri and colleagues. First, they discuss the limitations they see in the previous meta-analysis. They then conduct their own meta-analysis by re-analyzing the data, applying stricter inclusion criteria that retained only 5 of the papers from the original study (they also identified and included a newer study that met their criteria). In re-analyzing those papers, they found a prediction interval ranging from 0.06 to 5.14. (Side note: another cool piece of this paper is the discussion and reporting of both confidence intervals and prediction intervals, the latter of which are rarely discussed in the sport science literature.)
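To see the difference between a confidence interval and a prediction interval in a random-effects meta-analysis for yourself, here is a hypothetical sketch using the {metafor} package. The study effects below are made up for illustration; they are not the data from either paper:

```r
library(metafor)

# Hypothetical study-level log risk ratios and their sampling variances
dat <- data.frame(
  yi = log(c(0.40, 0.55, 0.30, 1.20, 0.75)),  # made-up log RRs
  vi = c(0.08, 0.10, 0.15, 0.12, 0.09)        # made-up variances
)

# Fit a random-effects model
res <- rma(yi, vi, data = dat, method = "REML")

# predict() reports both intervals, back-transformed to the RR scale
predict(res, transf = exp)
# ci.lb / ci.ub -> confidence interval for the AVERAGE effect
# pi.lb / pi.ub -> prediction interval for the effect in a NEW study
```

The confidence interval tells you about the average effect across studies; the prediction interval tells you the range of plausible effects in a new setting, which is usually much wider — exactly the point Impellizzeri and colleagues are making.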

The paper is a nice read if you want to see the thought process around how scientists read research and go about the scientific process of challenging a claim.

Some general thoughts

  • Wide prediction intervals leave us with a lot of uncertainty around the effectiveness of NHE in reducing the risk of hamstring strain injuries. At the lower end of the interval, NHE could be beneficial and protective for some, while at the upper end it could potentially be harmful to others.
  • A possible reason for the large uncertainty in the relationship between NHE and hamstring strain injury is that we might be missing key information about the individual that could indicate whether the exercise would be helpful or harmful. For example, context around their previous injury history (hamstring strains in particular), their training age, or other variables within the training program, all might be useful information to help paint a clearer picture about who might benefit the most or the least.
  • Injury is highly complex and multi-faceted. Trying to pin a decrease in injury risk to a single exercise or a single type of intervention seems like a bit of a stretch.
  • No exercise is a panacea, and we shouldn’t treat any of them as such. If you like doing NHE yourself (I actually enjoy it!), then do it. If you don’t, then do something else. Let’s not ascribe magical properties to certain exercises, and let’s not fight athletes over what they are supposedly missing by leaving an exercise out of their program when we don’t even have good certainty about whether it is beneficial.

TidyX 74: Joins in SQL vs Local R Environment

This week, Ellis Hughes and I continue our SQL in R series by discussing a common database task: joining two tables. Some of the things we cover:

  • The different types of joins (LEFT JOIN, RIGHT JOIN, FULL JOIN, INNER JOIN, and ANTI JOIN)
  • Doing a JOIN using SQL versus the local R environment and when you might choose one over the other
  • Using the {microbenchmark} package to test which approach is faster and performs better
  • Finally, if you’d like more explicit info on creating JOINS in R, I wrote a blog post a little over a year ago that covers this topic in more detail, CLICK HERE.
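As a rough sketch of the comparison (the table and column names here are illustrative, not from the episode), you can join inside the database via {dbplyr}, or pull the tables into R and join locally with {dplyr}, then time the two with {microbenchmark}:

```r
library(DBI)
library(dplyr)
library(dbplyr)
library(microbenchmark)

# Toy in-memory database with two related tables
con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "athletes",
             data.frame(id = 1:1000, team = rep(letters[1:4], 250)))
dbWriteTable(con, "sessions",
             data.frame(id = sample(1:1000, 5000, replace = TRUE),
                        load = rnorm(5000, 400, 50)))

# Option 1: join inside the database (dbplyr translates this to SQL)
db_join <- function() {
  tbl(con, "athletes") %>%
    left_join(tbl(con, "sessions"), by = "id") %>%
    collect()
}

# Option 2: pull both tables into R first, then join locally with dplyr
local_join <- function() {
  left_join(dbReadTable(con, "athletes"),
            dbReadTable(con, "sessions"),
            by = "id")
}

# Which is faster depends on table size, indexes, and transfer cost
microbenchmark(db_join(), local_join(), times = 20)

dbDisconnect(con)
```

In general, joining in the database pays off as tables get large (less data shipped to R), while local joins can win for small tables you have already pulled down.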

To watch our screen cast, CLICK HERE.

To access our code, CLICK HERE.