This past week on the Wharton Moneyball Podcast, during Quarter 2 of the show, the hosts got into a discussion about Bayesian methods, simulating uncertainty, and frequentist approaches to statistical analysis. The hosts are all strong Bayesian advocates, but at one point in the discussion Eric Bradlow mentioned that “frequentists can answer similar questions by building distributions from their model predictions” (paraphrasing).
This comment reminded me of Chapter 7 in Gelman and Hill’s brilliant book, Data Analysis Using Regression and Multilevel/Hierarchical Models. In that chapter, the authors take what they call an informal Bayesian approach by simulating the predictions from a linear regression model. It’s an interesting (and easy) approach that can be helpful for getting a sense of how Bayesian posterior predictions work without building a full Bayesian model. I call it “building a bridge to Bayes”.
Since the entire book is coded in R, I decided to code an example in Python.
The Jupyter notebook is accessible on my GitHub page.
(Special Material: In the Jupyter notebook I also include additional material on how to calculate prediction intervals and confidence intervals by hand in Python. I won’t go over those sections in full here, as I don’t want the post to get too long.)
Libraries & Data
We will start by loading up the libraries that we need and the data. I’ll use the iris data set, since it is conveniently available from the sklearn library. We will build the regression model using the statsmodels.api library.
## import libraries
from sklearn import datasets
import pandas as pd
import numpy as np
import statsmodels.api as sm
from scipy import stats
import matplotlib.pyplot as plt

## iris data set
iris = datasets.load_iris()

## convert to pandas data frame
data = iris['data']
target = iris['target']
iris_df = pd.DataFrame(data)
iris_df.columns = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
iris_df.head(3)
Build a linear regression model
Next, we build a simple ordinary least squares regression model to predict petal_width from petal_length.
## set up our X and y variables
X = iris_df['petal_length']
y = iris_df['petal_width']

## NOTE: statsmodels does not automatically add an intercept, so you need to do that manually
X = sm.add_constant(X)

## build regression model
## NOTE: the X and y arguments are reversed compared to sklearn (y comes first)
fit_lm = sm.OLS(y, X).fit()

## get an R-like output of the model
fit_lm.summary()
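As an aside, rather than copying numbers out of the summary table by hand (as we do below, for readability), the coefficient and its standard error can also be pulled directly from the fitted model:

## pull the slope coefficient and its standard error programmatically
fit_lm.params['petal_length']   # ~0.4158
fit_lm.bse['petal_length']      # ~0.0096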
Simulate a distribution around the slope coefficient
The slope coefficient for the petal_length variable, from the model output above, is 0.4158 with a standard error of 0.01. We will store these two values in their own variables and then use them to create a simulation of 10,000 samples and plot the distribution.
## get summary stats
mu_slope = 0.4158
se_slope = 0.01

## create simulation
n_samples = 10000
coef_sim = np.random.normal(loc = mu_slope, scale = se_slope, size = n_samples)

## plot simulated distribution
plt.hist(coef_sim, bins = 60)
plt.show()
We can also grab the summary statistics from the simulated distribution. We will snag the mean and the 90% quantile interval.
## get summary stats from our simulation
summary_stats = {
    'Mean': coef_sim.mean(),
    'Low90': np.quantile(coef_sim, 0.05),
    'High90': np.quantile(coef_sim, 0.95)
}

summary_stats
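One nice property of having the whole simulated distribution is that we can query it directly. For example (0.40 here is just an arbitrary threshold for illustration):

## proportion of simulated slopes greater than 0.40 (arbitrary threshold)
np.mean(coef_sim > 0.40)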
Making a prediction on a new observation and building a posterior predictive distribution
Now that we’ve gotten a good sense of how to create a simulation in Python, we can create a new observation of petal_length and make a prediction about what the petal_width would be based on our model. In addition, we will use the prediction interval from the output to back out a standard error for the prediction, which we will use for the posterior predictive simulation.
Technical Note: In the prediction output you will see that Python returns a mean_ci_lower and mean_ci_upper as well as an obs_ci_lower and obs_ci_upper. The latter two are the prediction interval while the former two are the confidence interval. I previously discussed the difference HERE, and this is confirmed in the Jupyter notebook where I calculate these values by hand.
## create a new petal_length observation
new_dat = np.array([[1, 1.7]])  # the leading 1 corresponds to the model intercept

## make prediction of petal_width using the model
prediction = fit_lm.get_prediction(new_dat)

## get summary of the prediction
prediction.summary_frame()
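For the curious, here is a condensed sketch of the by-hand interval math mentioned in the technical note above, assuming the standard OLS formulas under homoskedasticity (the full walk-through is in the Jupyter notebook). The values should reproduce the mean_ci_* and obs_ci_* columns of the summary frame:

## condensed sketch of the by-hand interval calculation
y_hat = fit_lm.predict(new_dat)[0]             # predicted petal_width (0.343709)

sigma2 = fit_lm.mse_resid                      # residual variance
xtx_inv = np.linalg.inv(X.T @ X)               # (X'X)^-1
var_mean = sigma2 * (new_dat @ xtx_inv @ new_dat.T)[0, 0]  # variance of the mean prediction
var_obs = sigma2 + var_mean                    # variance of a single new observation
t_crit = stats.t.ppf(0.975, fit_lm.df_resid)   # 95% two-sided critical value

## these should match mean_ci_* and obs_ci_* in the summary frame above
ci = (y_hat - t_crit * np.sqrt(var_mean), y_hat + t_crit * np.sqrt(var_mean))
pi = (y_hat - t_crit * np.sqrt(var_obs), y_hat + t_crit * np.sqrt(var_obs))
print(ci, pi)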
Store the predicted value (0.343709) and then back out a standard error for the prediction from the prediction interval. Because the default interval is 95%, the distance from the mean to the upper bound is roughly 1.96 standard errors, so we divide by 1.96. Then run a simulation and plot the distribution of predictions.
## get summary stats
mu_pred = 0.343709
se_pred = (0.754956 - 0.343709) / 1.96  # the upper 95% bound sits ~1.96 standard errors above the mean

## create simulation
n_sims = 10000
pred_obs = np.random.normal(loc = mu_pred, scale = se_pred, size = n_sims)

## plot simulated distribution
plt.hist(pred_obs, bins = 60)
plt.show()
Just as we did for the slope coefficient simulation, we can extract our summary statistics (mean and 90% quantile interval).
## get summary stats from our simulation
summary_stats = {
    'Mean': pred_obs.mean(),
    'Low90': np.quantile(pred_obs, 0.05),
    'High90': np.quantile(pred_obs, 0.95)
}

summary_stats
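As a quick sanity check, the simulated interval should land close to the analytic 90% prediction interval, which we can request from statsmodels by passing alpha to summary_frame():

## sanity check: compare against the analytic 90% prediction interval
prediction.summary_frame(alpha = 0.10)[['obs_ci_lower', 'obs_ci_upper']]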
That is a pretty easy way to approximate a Bayesian posterior predictive distribution. Rather than simply reporting the predicted value with a confidence interval or prediction interval, it is sometimes nice to build the entire distribution. Aside from being visually appealing, it lets us answer other questions we might have, for example: what percentage of the simulated predictions are greater (or less) than 0.5, or any other threshold value you might be interested in?
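A one-liner on the simulation answers that directly (0.5 here is just an arbitrary threshold):

## proportion of simulated petal_width predictions above 0.5
np.mean(pred_obs > 0.5)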
As stated earlier, all of this code is accessible on my GitHub page, and the Jupyter notebook also has additional sections on how to calculate confidence intervals and prediction intervals by hand.
If you notice any errors, let me know!