{"id":2289,"date":"2022-03-23T04:53:03","date_gmt":"2022-03-23T04:53:03","guid":{"rendered":"http:\/\/optimumsportsperformance.com\/blog\/?p=2289"},"modified":"2022-11-08T03:37:50","modified_gmt":"2022-11-08T03:37:50","slug":"confidence-intervals-for-random-forest-regression-using-tidymodels-sort-of","status":"publish","type":"post","link":"https:\/\/optimumsportsperformance.com\/blog\/confidence-intervals-for-random-forest-regression-using-tidymodels-sort-of\/","title":{"rendered":"Confidence Intervals for Random Forest Regression using tidymodels (sort of)"},"content":{"rendered":"<p>The random forest algorithm is an ensemble method that fits a large number of decision trees (weak learners) and uses their combined predictions, in a wisdom-of-the-crowds fashion, to make the final prediction. Although random forest can be used for classification tasks, today I want to talk about using random forest for regression problems (problems where the variable we are predicting is a continuous one). Specifically, I&#8217;m interested not only in a single point prediction but also in a confidence interval around that prediction.<\/p>\n<p>In R, the two main packages for fitting random forests are {<strong>ranger} <\/strong>and {<strong>randomForest}.<\/strong> These packages are also the two engines available when fitting random forests in {tidymodels}. When building models in the native packages, prediction on new data can be done with the <strong>predict()<\/strong> function (as with most models in R). To get an estimate of the variation in predictions, we pass the predict function the argument <strong>predict.all = TRUE<\/strong>, which returns the individual prediction made by every tree in the random forest. The problem is that this argument is not available for <strong>predict()<\/strong> in {tidymodels}. 
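As a quick illustration, here is a minimal sketch (assuming the native {randomForest} package and the built-in mtcars data, outside of {tidymodels}) of what predict.all = TRUE returns:

```r
## Minimal sketch: predict.all = TRUE in the native {randomForest} package
## returns both the ensemble average and every individual tree's prediction.
library(randomForest)

set.seed(1)
fit <- randomForest(mpg ~ ., data = mtcars, ntree = 500)

preds <- predict(fit, newdata = mtcars[1:3, ], predict.all = TRUE)

preds$aggregate        # point predictions: the mean over all 500 trees
dim(preds$individual)  # 3 x 500 matrix: one prediction per tree, per observation
```

Note that rowMeans(preds$individual) matches preds$aggregate, which is the single point estimate that {tidymodels} surfaces.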
Consequently, all we are left with in {tidymodels} is making a point estimate prediction (the average value of all of the trees in the forest)!!<\/p>\n<p>The way we can circumvent this issue is by fitting our model in {tidymodels} using cross-validation so that we can tune the <strong><em>mtry <\/em><\/strong>and <em><strong>trees <\/strong><\/em>values. Once we have the optimum values for these hyper-parameters, we will use the {<strong>randomForest}<\/strong> package to build a new model using those values. We will then make our predictions with this model on new data.<\/p>\n<p><strong>NOTE: <\/strong><em>I&#8217;m not 100% certain this is the best way to approach this problem inside or outside of {tidymodels}. If someone has a better solution, please drop it into the comments section or shoot me an email!<\/em><\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Load Packages &amp; Data<\/strong><\/span><\/p>\n<p>We will use the <strong>mtcars<\/strong> data set and try to predict each car&#8217;s mpg from the disp, hp, wt, qsec, drat, gear, and carb columns.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## Load packages\r\nlibrary(tidymodels)\r\nlibrary(tidyverse)\r\nlibrary(randomForest)\r\n\r\n## load data\r\ndf &lt;- mtcars %&gt;%\r\n  select(mpg, disp, hp, wt, qsec, drat, gear, carb)\r\n\r\nhead(df)\r\n<\/pre>\n<p><span style=\"text-decoration: underline;\"><strong>Split data into cross-validation sets<\/strong><\/span><\/p>\n<p>This is a small data set, so rather than spending data by splitting it into training and testing sets, I&#8217;m going to use cross-validation on all of the available data to fit the model and tune the hyper-parameters.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## Split data into cross-validation sets\r\nset.seed(5)\r\ndf_cv &lt;- vfold_cv(df, v = 5)\r\n<\/pre>\n<p><span style=\"text-decoration: underline;\"><strong>Specify the model type &amp; build a tuning 
grid<\/strong><\/span><\/p>\n<p>The model type will be a random forest regression using the <strong>randomForest <\/strong>engine.<\/p>\n<p>The tuning grid will provide combinations of <strong>mtry <\/strong>and <strong>trees<\/strong> values for the model to try as it tunes the hyper-parameters.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## specify the random forest regression model\r\nrf_spec &lt;- rand_forest(mtry = tune(), trees = tune()) %&gt;%\r\n  set_engine(&quot;randomForest&quot;) %&gt;%\r\n  set_mode(&quot;regression&quot;)\r\n\r\n## build a tuning grid\r\nrf_tune_grid &lt;- grid_regular(\r\n  mtry(range = c(1, 7)),\r\n  trees(range = c(500, 800)),\r\n  levels = 5\r\n)\r\n<\/pre>\n<p><span style=\"text-decoration: underline;\"><strong>Create a model recipe &amp; workflow<\/strong><\/span><\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## Model recipe\r\nrf_rec &lt;- recipe(mpg ~ ., data = df)\r\n\r\n## workflow\r\nrf_workflow &lt;- workflow() %&gt;%\r\n  add_recipe(rf_rec) %&gt;%\r\n  add_model(rf_spec)\r\n<\/pre>\n<p><span style=\"text-decoration: underline;\"><strong>Fit and tune the model<\/strong><\/span><\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## set a control function to save the predictions from the model fit to the CV-folds\r\nctrl &lt;- control_resamples(save_pred = TRUE)\r\n\r\n## fit model\r\nrf_tune &lt;- tune_grid(\r\n  rf_workflow,\r\n  resamples = df_cv,\r\n  grid = rf_tune_grid,\r\n  control = ctrl\r\n)\r\n\r\nrf_tune\r\n<\/pre>\n<p><a href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.10.25-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-2290\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.10.25-PM-1024x354.png\" alt=\"\" width=\"579\" height=\"200\" 
srcset=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.10.25-PM-1024x354.png 1024w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.10.25-PM-300x104.png 300w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.10.25-PM-768x266.png 768w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.10.25-PM-624x216.png 624w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.10.25-PM.png 1150w\" sizes=\"auto, (max-width: 579px) 100vw, 579px\" \/><\/a><\/p>\n<p><span style=\"text-decoration: underline;\"><strong>View the model performance and identify the best model<\/strong><\/span><\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## view model metrics\r\ncollect_metrics(rf_tune)\r\n\r\n## Which is the best model?\r\nselect_best(rf_tune, &quot;rmse&quot;)\r\n\r\n## Look at that model's performance\r\ncollect_metrics(rf_tune) %&gt;%\r\n  filter(mtry == 4, trees == 725)\r\n<\/pre>\n<p><a href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.12.39-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-2291\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.12.39-PM-1024x1008.png\" alt=\"\" width=\"533\" height=\"524\" srcset=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.12.39-PM-1024x1008.png 1024w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.12.39-PM-300x295.png 300w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.12.39-PM-768x756.png 768w, 
https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.12.39-PM-624x614.png 624w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.12.39-PM.png 1122w\" sizes=\"auto, (max-width: 533px) 100vw, 533px\" \/><\/a><br \/>\nHere we see that the model with the lowest root mean squared error (rmse) has an <strong>mtry = 4<\/strong> and <strong>trees = 725<\/strong>.<\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Extract the optimal mtry and trees values for minimizing rmse<\/strong><\/span><\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## Extract the best mtry and trees values to optimize rmse\r\nm &lt;- select_best(rf_tune, &quot;rmse&quot;) %&gt;% pull(mtry)\r\nt &lt;- select_best(rf_tune, &quot;rmse&quot;) %&gt;% pull(trees)\r\n\r\nm\r\nt\r\n<\/pre>\n<p><a href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.15.09-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-2292\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.15.09-PM.png\" alt=\"\" width=\"85\" height=\"103\" \/><\/a><\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Re-fit the model using the optimal mtry and trees values<\/strong><\/span><\/p>\n<p>Now that we&#8217;ve identified the hyper-parameters that minimize rmse, we will re-fit the model using the {<strong>randomForest} <\/strong>package, passing it the <strong>mtry <\/strong> and <strong>ntree <\/strong>values extracted from the {tidymodels} tuning results, so that we can get predictions from all of the individual trees.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## Re-fit the model outside of tidymodels with the optimized values\r\nrf_refit &lt;- randomForest(mpg ~ ., data = df, mtry = m, ntree = 
t)\r\nrf_refit\r\n<\/pre>\n<p><span style=\"text-decoration: underline;\"><strong>Create new data and make predictions<\/strong><\/span><\/p>\n<p>When making the predictions we have to make sure to pass the argument <strong>predict.all = TRUE<\/strong>.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## New data\r\nset.seed(859)\r\nrow_id &lt;- sample(1:nrow(df), size = 5, replace = TRUE)\r\nnewdat &lt;- df&#x5B;row_id, ]\r\nnewdat\r\n\r\n## Make Predictions\r\npred.rf &lt;- predict(rf_refit, newdat, predict.all = TRUE)\r\npred.rf\r\n<\/pre>\n<p><span style=\"text-decoration: underline;\"><strong>What do predictions look like?<\/strong><\/span><\/p>\n<p>Because we requested <strong>predict.all = TRUE<\/strong>, we can see the prediction from each of the 725 trees that were fit. Below we look at the first and last 6 of the 725 individual tree predictions for the first observation in our new data set.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## Look at all 725 predictions for the first row of the data\r\nhead(pred.rf$individual&#x5B;1, ])\r\ntail(pred.rf$individual&#x5B;1, ])\r\n<\/pre>\n<p><a href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.21.11-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-2293\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.21.11-PM.png\" alt=\"\" width=\"552\" height=\"114\" srcset=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.21.11-PM.png 920w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.21.11-PM-300x62.png 300w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.21.11-PM-768x159.png 768w, 
https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.21.11-PM-624x129.png 624w\" sizes=\"auto, (max-width: 552px) 100vw, 552px\" \/><\/a><\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Calculate a 95% Confidence Interval<\/strong><\/span><\/p>\n<p>Taking the mean of the 725 predictions will produce the predicted value for the new observation, using the wisdom of the crowds. Similarly, the standard deviation of these 725 predictions will give us a sense for the variability of the weak learners. We can use this information to produce our confidence intervals. We calculate the 95% confidence interval as the mean prediction plus or minus the standard deviation of the predictions multiplied by the t-critical value, which we obtain from a t-distribution with degrees of freedom equal to 725 &#8211; 1.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n# Average prediction -- what the prediction function returns\r\nmean(pred.rf$individual&#x5B;1, ])\r\n\r\n# SD of predictions\r\nsd(pred.rf$individual&#x5B;1, ])\r\n\r\n# get t-critical value for df = 725 - 1\r\nt_crit &lt;- qt(p = 0.975, df = t - 1)\r\n\r\n# 95% CI\r\nmean(pred.rf$individual&#x5B;1, ]) - t_crit * sd(pred.rf$individual&#x5B;1, ])\r\nmean(pred.rf$individual&#x5B;1, ]) + t_crit * sd(pred.rf$individual&#x5B;1, ])\r\n<\/pre>\n<p><a href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.30.02-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-2294\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.30.02-PM-1024x539.png\" alt=\"\" width=\"564\" height=\"297\" srcset=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.30.02-PM-1024x539.png 1024w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.30.02-PM-300x158.png 
300w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.30.02-PM-768x404.png 768w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.30.02-PM-624x328.png 624w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.30.02-PM.png 1034w\" sizes=\"auto, (max-width: 564px) 100vw, 564px\" \/><\/a><\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Make a prediction with confidence intervals for all of the observations in our new data<\/strong><\/span><\/p>\n<p>First we will make a single point prediction (the average, wisdom-of-the-crowds prediction) and then we will write a <strong>for()<\/strong> loop to create the lower and upper 95% Confidence Intervals using the same approach as above.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## Now for all of the predictions\r\nnewdat$pred_mpg &lt;- predict(rf_refit, newdat)\r\n\r\n## add confidence intervals\r\nlower &lt;- rep(NA, nrow(newdat))\r\nupper &lt;- rep(NA, nrow(newdat))\r\n\r\nfor(i in 1:nrow(newdat)){\r\n  lower&#x5B;i] &lt;- mean(pred.rf$individual&#x5B;i, ]) - t_crit * sd(pred.rf$individual&#x5B;i, ])\r\n  upper&#x5B;i] &lt;- mean(pred.rf$individual&#x5B;i, ]) + t_crit * sd(pred.rf$individual&#x5B;i, ])\r\n}\r\n\r\nnewdat$lwr &lt;- lower\r\nnewdat$upr &lt;- upper\r\n<\/pre>\n<p><span style=\"text-decoration: underline;\"><strong>View the new observations with their predictions and create a plot of the predictions versus the actual data<\/strong><\/span><\/p>\n<p>The three columns on the right show us the predicted miles per gallon and the 95% confidence interval for each of the five new observations.<\/p>\n<p>The plot shows us the point prediction and confidence interval along with the actual mpg (in red), which we can see falls within each of 
the ranges.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## Look at the new observations, predictions, and confidence intervals and plot the data\r\n## new data\r\nnewdat\r\n\r\n## plot\r\nnewdat %&gt;%\r\n  mutate(car_type = rownames(.)) %&gt;%\r\n  ggplot(aes(x = pred_mpg, y = reorder(car_type, pred_mpg))) +\r\n  geom_point(size = 5) +\r\n  geom_errorbar(aes(xmin = lwr, xmax = upr),\r\n                width = 0.1,\r\n                size = 1.3) +\r\n  geom_point(aes(x = mpg),\r\n             size = 5,\r\n             color = &quot;red&quot;) +\r\n  theme_minimal() +\r\n  labs(x = &quot;Predicted vs Actual MPG&quot;,\r\n       y = NULL,\r\n       title = &quot;Predicted vs Actual (red) MPG from Random Forest&quot;,\r\n       subtitle = &quot;mpg ~ disp + hp + wt + qsec + drat + gear + carb&quot;)\r\n\r\n<\/pre>\n<p><a href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.34.17-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-2295\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.34.17-PM-1024x186.png\" alt=\"\" width=\"625\" height=\"114\" srcset=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.34.17-PM-1024x186.png 1024w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.34.17-PM-300x54.png 300w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.34.17-PM-768x139.png 768w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.34.17-PM-624x113.png 624w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.34.17-PM.png 1214w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/a><\/p>\n<p><a 
href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.38.36-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-2296\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.38.36-PM-1024x771.png\" alt=\"\" width=\"625\" height=\"471\" srcset=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.38.36-PM-1024x771.png 1024w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.38.36-PM-300x226.png 300w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.38.36-PM-768x578.png 768w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.38.36-PM-624x470.png 624w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/03\/Screen-Shot-2022-03-22-at-9.38.36-PM.png 1984w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Wrapping Up<\/strong><\/span><\/p>\n<p>Random forests can be used for regression or classification problems. Here, we used the algorithm for regression with the goal of obtaining 95% Confidence Intervals based on the variability of predictions exhibited by all of the trees in the forest. Again, I&#8217;m not certain that this is the best way to achieve this output either inside of or outside of {tidymodels}. 
If anyone has other thoughts, feel free to drop them in the comments or shoot me an email.<\/p>\n<p>To access all of the code for this article, please see my <strong><span style=\"color: #0000ff;\"><a style=\"color: #0000ff;\" href=\"https:\/\/github.com\/pw2\/tidymodels_template\/blob\/main\/Confidence%20Intervals%20for%20Random%20Forest%20Regression%20using%20tidymodels%20(sort%20of).R\">GITHUB page<\/a><\/span><\/strong>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The random forest algorithm is an ensemble method that fits a large number of decision trees (weak learners) and uses their combined predictions, in a wisdom of the crowds type of fashion, to make the final prediction. Although random forest can be used for classification tasks, today I want to talk about using random forest [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[47,45],"tags":[],"class_list":["post-2289","post","type-post","status-publish","format-standard","hentry","category-model-building-in-r","category-r-tips-tricks"],"_links":{"self":[{"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/posts\/2289","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/comments?post=2289"}],"version-history":[{"count":2,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/posts\/2289\/revisions"}],"predecessor-version":[{"id":2298,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/posts\/2289\/revisions\/2298"
}],"wp:attachment":[{"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/media?parent=2289"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/categories?post=2289"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/tags?post=2289"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}