{"id":2545,"date":"2022-07-08T06:49:12","date_gmt":"2022-07-08T06:49:12","guid":{"rendered":"http:\/\/optimumsportsperformance.com\/blog\/?p=2545"},"modified":"2022-11-08T03:34:42","modified_gmt":"2022-11-08T03:34:42","slug":"bayesian-simple-linear-regression-by-hand-gibbs-sampler","status":"publish","type":"post","link":"https:\/\/optimumsportsperformance.com\/blog\/bayesian-simple-linear-regression-by-hand-gibbs-sampler\/","title":{"rendered":"Bayesian Simple Linear Regression by Hand (Gibbs Sampler)"},"content":{"rendered":"<p>Earlier this week, <span style=\"color: #0000ff;\"><strong><a style=\"color: #0000ff;\" href=\"https:\/\/optimumsportsperformance.com\/blog\/making-predictions-with-a-bayesian-regression-model\/\">I briefly discussed a few ways of making various predictions from a Bayesian Regression Model<\/a><\/strong><\/span>. That article took advantage of the Bayesian scaffolding provided by the {<strong>rstanarm<\/strong>} package which runs {<strong>Stan<\/strong>} under the hood, to fit the model.<\/p>\n<p>As is often the case, when possible, I like to do a lot of the work by hand &#8212; partially because it helps me learn and partially because I&#8217;m a glutton for punishment. 
So, since we used {<strong>rstanarm<\/strong>} last time, I figured it would be fun to write our own Bayesian simple linear regression by hand using a Gibbs sampler.<\/p>\n<p>To allow us to make a comparison to the model fit in the previous article, I&#8217;ll use the same data set and refit the model in {<strong>rstanarm<\/strong>}.<\/p>\n<p><strong>Data &amp; Model<\/strong><\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\nlibrary(tidyverse)\r\nlibrary(patchwork)\r\nlibrary(palmerpenguins)\r\nlibrary(rstanarm)\r\n\r\ntheme_set(theme_classic())\r\n\r\n## get data\r\ndat &lt;- na.omit(penguins)\r\nadelie &lt;- dat %&gt;% \r\n  filter(species == &quot;Adelie&quot;) %&gt;%\r\n  select(bill_length_mm, bill_depth_mm)\r\n\r\n## fit model\r\nfit &lt;- stan_glm(bill_depth_mm ~ bill_length_mm, data = adelie)\r\nsummary(fit)\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Build the Model by Hand<\/strong><\/span><\/p>\n<p><em><strong>Some Notes on the Gibbs Sampler<\/strong><\/em><\/p>\n<ul>\n<li>A Gibbs sampler is one of several Bayesian sampling approaches.<\/li>\n<li>The Gibbs sampler works by cycling through the model parameters, drawing a proposal value for each parameter in turn from its full conditional posterior distribution, given the current values of the other parameters.<\/li>\n<li>In the Gibbs sampler, the proposal value is accepted 100% of the time. 
This last point is where the Gibbs sampler differs from other samplers, for example the Metropolis algorithm, where each proposal drawn is compared to the current value and accepted only with some probability.<\/li>\n<li>The nice part about the Gibbs sampler, aside from it being easy to construct, is that it allows you to estimate multiple parameters, for example the mean and the standard deviation for a normal distribution.<\/li>\n<\/ul>\n<p><em><strong>What&#8217;s needed to build a Gibbs sampler?<\/strong><\/em><\/p>\n<p>To build the Gibbs sampler we need a few values to start with.<\/p>\n<ol>\n<li>We need to set some priors on the intercept, slope, and sigma value. This isn&#8217;t different from what we did in {<strong>rstanarm<\/strong>}; however, recall that we used the default, weakly informative priors provided by the {<strong>rstanarm<\/strong>} library. Since we are constructing our own model, we will need to specify the priors ourselves.<\/li>\n<li>We need the values of our observations placed into their own respective vectors.<\/li>\n<li>We need a start value for the intercept and slope to help get the process going.<\/li>\n<\/ol>\n<p>That&#8217;s it! Pretty simple. Let&#8217;s specify these values so that we can continue on.<\/p>\n<p><em><strong>Setting our priors<\/strong><\/em><\/p>\n<p>Since we have no real prior knowledge about the bill depth of Adelie penguins and don&#8217;t have a good sense for what the relationship between bill length and bill depth is, we will set our own weakly informative priors. We will specify both the intercept and slope to be normally distributed with a mean of 0 and a standard deviation of 30. Essentially, we will <em>let the data speak<\/em>. 
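<\/p>\n<p>To get a sense of just how weak a Normal(0, 30) prior is, we can check its central 95% range, which spans values far beyond any plausible bill depth (in mm) or slope:<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## central 95% interval of a Normal(0, 30) prior\r\nqnorm(p = c(0.025, 0.975), mean = 0, sd = 30)\r\n## roughly -58.8 to 58.8\r\n<\/pre>\n<p>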
One technical note is that I am converting the standard deviation to precision, which is nothing more than 1 \/ variance (and recall that variance is just the standard deviation squared).<\/p>\n<p>For the precision prior, tau (from which sigma can be recovered as sqrt(1 \/ tau)), I&#8217;m going to specify a gamma prior with a shape and rate of 0.01.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## set priors\r\nintercept_prior_mu &lt;- 0\r\nintercept_prior_sd &lt;- 30\r\nintercept_prior_prec &lt;- 1\/(intercept_prior_sd^2)\r\n\r\nslope_prior_mu &lt;- 0\r\nslope_prior_sd &lt;- 30\r\nslope_prior_prec &lt;- 1\/(slope_prior_sd^2)\r\n\r\ntau_shape_prior &lt;- 0.01\r\ntau_rate_prior &lt;- 0.01\r\n<\/pre>\n<p>Let&#8217;s plot the priors and see what they look like.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## plot priors\r\nN &lt;- 1e4\r\nintercept_prior &lt;- rnorm(n = N, mean = intercept_prior_mu, sd = intercept_prior_sd)\r\nslope_prior &lt;- rnorm(n = N, mean = slope_prior_mu, sd = slope_prior_sd)\r\ntau_prior &lt;- rgamma(n = N, shape = tau_shape_prior, rate = tau_rate_prior)\r\n\r\npar(mfrow = c(1, 3))\r\nplot(density(intercept_prior), main = &quot;Prior Intercept&quot;)\r\nplot(density(slope_prior), main = &quot;Prior Slope&quot;)\r\nplot(density(tau_prior), main = &quot;Prior Sigma&quot;)\r\n<\/pre>\n<p><a href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.31.22-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-2546\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.31.22-PM-1024x600.png\" alt=\"\" width=\"625\" height=\"366\" srcset=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.31.22-PM-1024x600.png 1024w, 
https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.31.22-PM-300x176.png 300w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.31.22-PM-768x450.png 768w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.31.22-PM-624x366.png 624w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.31.22-PM.png 1618w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/a><\/p>\n<p><em><strong>Place the observations in their own vectors<\/strong><\/em><\/p>\n<p>We will store the bill depth and length in their own vectors.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## observations\r\nbill_depth &lt;- adelie$bill_depth_mm\r\nbill_length &lt;- adelie$bill_length_mm\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><em><strong>Initializing Values<\/strong><\/em><\/p>\n<p>Because the sampler runs iteratively, with each iteration conditioning on the draws from the one before it, we need a few values to get the process started before the first iteration. Essentially, we need values to serve as iteration 0. We want to start with reasonable values and let the model run from there. I&#8217;ll start the intercept value off with 20 and the slope with 1.<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\nintercept_start_value &lt;- 20\r\nslope_start_value &lt;- 1\r\n<\/pre>\n<p><em><strong>Gibbs Sampler Function<\/strong><\/em><\/p>\n<p>We will write a custom Gibbs sampler function to do all of the heavy lifting for us. I tried to comment each step within the function so that it is clear what is going on. The function takes an x variable (the independent variable), a y variable (the dependent variable), all of the priors that we specified, and the start values for the intercept and slope. 
The final two arguments of the function are the number of simulations you want to run and the burn-in amount. The burn-in, sometimes referred to as the warm-up, is the number of early simulations that you throw away while the model is working to converge. Usually you will run several thousand simulations, so you&#8217;ll discard the first 1,000-2,000 as the model explores the potential parameter space and settles in on values indicative of the data. At each iteration, the sampler computes the regression&#8217;s predictions from the current intercept and slope, compares them to the observed values, and uses the resulting sum of squared errors to update the model precision, tau (and thus sigma).<\/p>\n<p>Each simulation iteration is indexed within the <strong>for()<\/strong> loop as &#8220;i&#8221; and you&#8217;ll notice that the loop begins at iteration 2 and continues until the specified number of simulations is complete. 
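<\/p>\n<p>The tau update described above is a single conjugate gamma draw driven by the sum of squared errors. As a minimal standalone sketch (the toy data and coefficient values below are made up purely for illustration):<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## toy data and current parameter draws (illustration only)\r\nx_toy &lt;- c(1, 2, 3, 4)\r\ny_toy &lt;- c(2.1, 3.9, 6.2, 8.1)\r\nintercept_draw &lt;- 0\r\nslope_draw &lt;- 2\r\n\r\n## sum of squared errors given the current draws\r\npreds_toy &lt;- intercept_draw + slope_draw * x_toy\r\nsse_toy &lt;- sum((y_toy - preds_toy)^2)\r\n\r\n## conjugate gamma draw for tau (precision); sigma is sqrt(1 \/ tau)\r\ntau_draw &lt;- rgamma(n = 1, shape = 0.01 + length(y_toy) \/ 2, rate = 0.01 + sse_toy \/ 2)\r\nsigma_draw &lt;- sqrt(1 \/ tau_draw)\r\n<\/pre>\n<p>A small sum of squared errors pushes tau toward larger values (a smaller sigma), and vice versa.<\/p>\n<p>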
Recall that the reason for starting at iteration 2 is that our starting values for the slope and intercept fill position 1; they kick off the loop and produce the first prediction of bill depth before the model starts updating (see the initial predictions computed at the top of the function).<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## gibbs sampler\r\ngibbs_sampler &lt;- function(x, y, intercept_prior_mu, intercept_prior_prec, slope_prior_mu, slope_prior_prec, tau_shape_prior, tau_rate_prior, intercept_start_value, slope_start_value, n_sims, burn_in){\r\n  \r\n  ## get sample size\r\n  n_obs &lt;- length(y)\r\n  \r\n  ## initial predictions with starting values\r\n  preds1 &lt;- intercept_start_value + slope_start_value * x\r\n  sse1 &lt;- sum((y - preds1)^2)\r\n  tau_shape &lt;- tau_shape_prior + n_obs \/ 2\r\n  \r\n  ## vectors to store values\r\n  sse &lt;- c(sse1, rep(NA, n_sims))\r\n  \r\n  intercept &lt;- c(intercept_start_value, rep(NA, n_sims))\r\n  slope &lt;- c(slope_start_value, rep(NA, n_sims))\r\n  tau_rate &lt;- c(NA, rep(NA, n_sims))\r\n  tau &lt;- c(NA, rep(NA, n_sims))\r\n  \r\n  for(i in 2:n_sims){\r\n    \r\n    # Tau Values\r\n    tau_rate&#x5B;i] &lt;- tau_rate_prior + sse&#x5B;i - 1]\/2\r\n    tau&#x5B;i] &lt;- rgamma(n = 1, shape = tau_shape, rate = tau_rate&#x5B;i]) \r\n    \r\n    # Intercept Values\r\n    intercept_mu &lt;- (intercept_prior_prec*intercept_prior_mu + tau&#x5B;i] * sum(y - slope&#x5B;i - 1]*x)) \/ (intercept_prior_prec + n_obs*tau&#x5B;i])\r\n    intercept_prec &lt;- intercept_prior_prec + n_obs*tau&#x5B;i]\r\n    intercept&#x5B;i] &lt;- rnorm(n = 1, mean = intercept_mu, sd = sqrt(1 \/ intercept_prec))\r\n    \r\n    # Slope Values\r\n    slope_mu &lt;- (slope_prior_prec*slope_prior_mu + tau&#x5B;i] * sum(x * (y - intercept&#x5B;i]))) \/ (slope_prior_prec + tau&#x5B;i] * sum(x^2))\r\n    slope_prec &lt;- slope_prior_prec + tau&#x5B;i] * sum(x^2)\r\n    slope&#x5B;i] &lt;- rnorm(n = 1, mean = slope_mu, sd = sqrt(1 \/ slope_prec))\r\n  
  \r\n    preds &lt;- intercept&#x5B;i] + slope&#x5B;i] * x\r\n    sse&#x5B;i] &lt;- sum((y - preds)^2)\r\n    \r\n  }\r\n  \r\n  list(\r\n    intercept = na.omit(intercept&#x5B;-1:-burn_in]), \r\n    slope = na.omit(slope&#x5B;-1:-burn_in]), \r\n    tau = na.omit(tau&#x5B;-1:-burn_in]))\r\n  \r\n}\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><em><strong>Run the Function<\/strong><\/em><\/p>\n<p>Now it is as easy as providing each argument of our function with all of the values specified above. I&#8217;ll run the function for 20,000 simulations and set the burnin value to 1,000.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\nsim_results &lt;- gibbs_sampler(x = bill_length,\r\n    y = bill_depth,\r\n    intercept_prior_mu = intercept_prior_mu,\r\n    intercept_prior_prec = intercept_prior_prec,\r\n    slope_prior_mu = slope_prior_mu,\r\n    slope_prior_prec = slope_prior_prec,\r\n    tau_shape_prior = tau_shape_prior,\r\n    tau_rate_prior = tau_rate_prior,\r\n    intercept_start_value = intercept_start_value,\r\n    slope_start_value = slope_start_value,\r\n    n_sims = 20000,\r\n    burn_in = 1000)\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><em><strong>Model Summary Statistics<\/strong><\/em><\/p>\n<p>The results from the function are returned as a list with an element for the simulated intercept, slope, and sigma values. We will summarize each by calculating the mean, standard deviation, and 90% Credible Interval. 
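<\/p>\n<p>Before summarizing, it can also be useful to eyeball the retained draws for convergence with simple trace plots; well-mixed chains should look like stationary, fuzzy bands with no drift. A minimal sketch using the sim_results list from above:<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## quick trace plots of the retained draws\r\npar(mfrow = c(1, 3))\r\nplot(sim_results$intercept, type = &quot;l&quot;, main = &quot;Intercept&quot;, xlab = &quot;iteration&quot;, ylab = &quot;value&quot;)\r\nplot(sim_results$slope, type = &quot;l&quot;, main = &quot;Slope&quot;, xlab = &quot;iteration&quot;, ylab = &quot;value&quot;)\r\nplot(sqrt(1 \/ sim_results$tau), type = &quot;l&quot;, main = &quot;Sigma&quot;, xlab = &quot;iteration&quot;, ylab = &quot;value&quot;)\r\n<\/pre>\n<p>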
We can then compare what we obtained from our Gibbs Sampler to the results from our {<strong>rstanarm<\/strong>} model, which used Hamiltonian Monte Carlo (a different sampling approach).<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n## Extract summary stats\r\nintercept_posterior_mean &lt;- mean(sim_results$intercept, na.rm = TRUE)\r\nintercept_posterior_sd &lt;- sd(sim_results$intercept, na.rm = TRUE)\r\nintercept_posterior_cred_int &lt;- qnorm(p = c(0.05,0.95), mean = intercept_posterior_mean, sd = intercept_posterior_sd)\r\n\r\nslope_posterior_mean &lt;- mean(sim_results$slope, na.rm = TRUE)\r\nslope_posterior_sd &lt;- sd(sim_results$slope, na.rm = TRUE)\r\nslope_posterior_cred_int &lt;- qnorm(p = c(0.05,0.95), mean = slope_posterior_mean, sd = slope_posterior_sd)\r\n\r\nsigma_posterior_mean &lt;- mean(sqrt(1 \/ sim_results$tau), na.rm = TRUE)\r\nsigma_posterior_sd &lt;- sd(sqrt(1 \/ sim_results$tau), na.rm = TRUE)\r\nsigma_posterior_cred_int &lt;- qnorm(p = c(0.05,0.95), mean = sigma_posterior_mean, sd = sigma_posterior_sd)\r\n\r\n## Extract rstanarm values\r\nrstan_intercept &lt;- coef(fit)&#x5B;1]\r\nrstan_slope &lt;- coef(fit)&#x5B;2]\r\nrstan_sigma &lt;- sigma(fit)\r\nrstan_cred_int_intercept &lt;- as.vector(posterior_interval(fit)&#x5B;1, ])\r\nrstan_cred_int_slope &lt;- as.vector(posterior_interval(fit)&#x5B;2, ])\r\nrstan_cred_int_sigma &lt;- as.vector(posterior_interval(fit)&#x5B;3, ])\r\n\r\n## Compare summary stats to the rstanarm model\r\n## Model Averages\r\nmodel_means &lt;- data.frame(\r\n  model = c(&quot;Gibbs&quot;, &quot;Rstan&quot;),\r\n  intercept_mean = c(intercept_posterior_mean, rstan_intercept),\r\n  slope_mean = c(slope_posterior_mean, rstan_slope),\r\n  sigma_mean = c(sigma_posterior_mean, rstan_sigma)\r\n)\r\n\r\n## Model 90% Credible Intervals\r\nmodel_cred_int &lt;- data.frame(\r\n  model = c(&quot;Gibbs Intercept&quot;, &quot;Rstan Intercept&quot;, &quot;Gibbs Slope&quot;, &quot;Rstan Slope&quot;, &quot;Gibbs 
Sigma&quot;,&quot;Rstan Sigma&quot;),\r\n  x5pct = c(intercept_posterior_cred_int&#x5B;1], rstan_cred_int_intercept&#x5B;1], slope_posterior_cred_int&#x5B;1], rstan_cred_int_slope&#x5B;1], sigma_posterior_cred_int&#x5B;1], rstan_cred_int_sigma&#x5B;1]),\r\n  x95pct = c(intercept_posterior_cred_int&#x5B;2], rstan_cred_int_intercept&#x5B;2], slope_posterior_cred_int&#x5B;2], rstan_cred_int_slope&#x5B;2], sigma_posterior_cred_int&#x5B;2], rstan_cred_int_sigma&#x5B;2])\r\n)\r\n\r\n## view tables\r\nmodel_means\r\nmodel_cred_int\r\n<\/pre>\n<p><a href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.40.10-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-2547\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.40.10-PM.png\" alt=\"\" width=\"556\" height=\"297\" srcset=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.40.10-PM.png 850w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.40.10-PM-300x160.png 300w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.40.10-PM-768x410.png 768w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.40.10-PM-624x333.png 624w\" sizes=\"auto, (max-width: 556px) 100vw, 556px\" \/><\/a><\/p>\n<p>Even though the two approaches use a different sampling method, the results are relatively close to each other.<\/p>\n<p><em><strong>Visual Comparisons of Posterior Distributions<\/strong><\/em><\/p>\n<p>Finally, we can visualize the posterior distributions between the two models.<\/p>\n<pre class=\"brush: r; title: ; notranslate\" title=\"\">\r\n# put the posterior simulations from the Gibbs sampler into a data frame\r\ngibbs_posteriors &lt;- data.frame( Intercept = 
sim_results$intercept, bill_length_mm = sim_results$slope, sigma = sqrt(1 \/ sim_results$tau) ) %&gt;%\r\n  pivot_longer(cols = everything()) %&gt;%\r\n  arrange(name) %&gt;%\r\n  mutate(name = factor(name, levels = c(&quot;Intercept&quot;, &quot;bill_length_mm&quot;, &quot;sigma&quot;)))\r\n\r\ngibbs_plot &lt;- gibbs_posteriors %&gt;%\r\n  ggplot(aes(x = value)) +\r\n  geom_histogram(fill = &quot;light blue&quot;,\r\n                 color = &quot;grey&quot;) +\r\n  facet_wrap(~name, scales = &quot;free_x&quot;) +\r\n  ggtitle(&quot;Gibbs Posterior Distributions&quot;)\r\n\r\n\r\nrstan_plot &lt;- plot(fit, &quot;hist&quot;) + \r\n  ggtitle(&quot;Rstan Posterior Distributions&quot;)\r\n\r\n\r\ngibbs_plot \/ rstan_plot\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.43.47-PM.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-2548\" src=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.43.47-PM-1024x802.png\" alt=\"\" width=\"625\" height=\"490\" srcset=\"https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.43.47-PM-1024x802.png 1024w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.43.47-PM-300x235.png 300w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.43.47-PM-768x601.png 768w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.43.47-PM-624x489.png 624w, https:\/\/optimumsportsperformance.com\/blog\/wp-content\/uploads\/2022\/07\/Screen-Shot-2022-07-07-at-11.43.47-PM.png 1954w\" sizes=\"auto, (max-width: 625px) 100vw, 625px\" \/><\/a><\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Wrapping Up<\/strong><\/span><\/p>\n<p>We 
created a function that fits a simple linear regression using Gibbs sampling and found the results to be quite similar to those from our {<strong>rstanarm<\/strong>} model, which uses a different sampling algorithm and also had different prior specifications. It&#8217;s often not necessary to write your own function like this, but doing so can be a fun way to learn a bit about what is going on under the hood of the functions provided in the various R libraries you use.<\/p>\n<p>The entire code can be accessed on my <span style=\"color: #0000ff;\"><strong><a style=\"color: #0000ff;\" href=\"https:\/\/github.com\/pw2\/R-Tips-Tricks\/blob\/master\/Bayesian%20Simple%20Linear%20Regression%20by%20Hand%20(Gibbs%20Sampler).Rmd\">GitHub page<\/a><\/strong><\/span>.<\/p>\n<p>Feel free to reach out if you notice any math or code errors.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Earlier this week, I briefly discussed a few ways of making various predictions from a Bayesian Regression Model. That article took advantage of the Bayesian scaffolding provided by the {rstanarm} package which runs {Stan} under the hood, to fit the model. 
As is often the case, when possible, I like to do a lot of [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[49,45,43],"tags":[],"class_list":["post-2545","post","type-post","status-publish","format-standard","hentry","category-bayesian-model-building","category-r-tips-tricks","category-sports-analytics"],"_links":{"self":[{"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/posts\/2545","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/comments?post=2545"}],"version-history":[{"count":4,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/posts\/2545\/revisions"}],"predecessor-version":[{"id":2552,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/posts\/2545\/revisions\/2552"}],"wp:attachment":[{"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/media?parent=2545"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/categories?post=2545"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/optimumsportsperformance.com\/blog\/wp-json\/wp\/v2\/tags?post=2545"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}