Bayesian Time Series Analysis with bsts
Bayesian methods are where we need to go. I have pretty strong opinions on this: Bayes provides a way to account for our prior understanding of the world, to move away from null hypothesis testing, and to take advantage of prior work. Additionally, hierarchical modeling provides a way to pool strength across groups when you have a smaller number of data points. All in all, it is a great way to do analysis.
The benefits of Bayesian methods carry over to time series analysis as well. Bayes provides a way to use a hierarchical modeling approach and to assign probabilities directly to outcomes. This very direct way of expressing uncertainty makes it a nice foil to 95% confidence intervals.
I have taken the examples from Stitch Fix's blog, where they contrast ARIMA with Bayesian structural time series modeling.
ARIMA
An autoregressive integrated moving average modeling technique; here an ARIMA(0,1,1)(0,1,1)[12] specification, i.e., first-order differencing and a first-order moving average term for both the regular and seasonal parts.
```r
library(lubridate)
library(bsts)
library(dplyr)
library(ggplot2)
library(forecast)  # provides fitted() for Arima objects

### Load the data
data("AirPassengers")
Y <- window(AirPassengers, start = c(1949, 1), end = c(1959, 12))

### Fit the ARIMA model
arima <- arima(log10(Y), order = c(0, 1, 1),
               seasonal = list(order = c(0, 1, 1), period = 12))

### Actual versus predicted
d1 <- data.frame(c(10^as.numeric(fitted(arima)),                       # in-sample fitted
                   10^as.numeric(predict(arima, n.ahead = 12)$pred)),  # holdout forecast
                 as.numeric(AirPassengers),                            # actuals
                 as.Date(time(AirPassengers)))                         # dates
names(d1) <- c("Fitted", "Actual", "Date")

### MAPE (mean absolute percentage error)
MAPE <- filter(d1, year(Date) > 1959) %>%
  summarise(MAPE = mean(abs(Actual - Fitted) / Actual))

### Plot actual versus predicted
ggplot(data = d1, aes(x = Date)) +
  geom_line(aes(y = Actual, colour = "Actual"), size = 1.2) +
  geom_line(aes(y = Fitted, colour = "Fitted"), size = 1.2, linetype = 2) +
  geom_vline(xintercept = as.numeric(as.Date("1959-12-01")), linetype = 2) +
  ggtitle(paste0("ARIMA -- Holdout MAPE = ", round(100 * MAPE, 2), "%")) +
  theme_bw() + theme(legend.title = element_blank()) + ylab("") + xlab("") +
  theme(axis.text.x = element_text(angle = -90, hjust = 0))
```
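A couple of notes on the setup: the series is modeled on the log10 scale (hence the `10^` back-transformations), and the final twelve months (1960) are held out, so the MAPE is a genuine out-of-sample error.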
Bayesian Structural Model
A different approach would be to use a Bayesian structural time series model with unobserved components. This technique is more transparent than ARIMA models and deals with uncertainty in a more elegant manner. It is more transparent because its representation does not rely on differencing, lags and moving averages. You can visually inspect the underlying components of the model. It handles uncertainty in a better way because you can quantify the posterior uncertainty of the individual components, control the variance of the components, and impose prior beliefs on the model. Last, but not least, any ARIMA model can be recast as a structural model.
The model thus takes the form:

$$Y_t = \mu_t + x_t \beta + S_t + e_t, \qquad e_t \sim N(0, \sigma_e^2)$$

$$\mu_{t+1} = \mu_t + \nu_t, \qquad \nu_t \sim N(0, \sigma_\nu^2)$$
Here $x_t$ denotes a set of regressors, $S_t$ represents seasonality, and $\mu_t$ is the local level term. The local level term defines how the latent state evolves over time and is often referred to as the unobserved trend. This could, for example, represent an underlying growth in the brand value of a company or external factors that are hard to pinpoint, but it can also soak up short-term fluctuations that should be controlled for with explicit terms. Note that the regressor coefficients, seasonality, and trend are estimated simultaneously, which helps avoid strange coefficient estimates due to spurious relationships (similar in spirit to Granger causality). In addition, due to the Bayesian nature of the model, we can shrink the elements of $\beta$ to promote sparsity, or specify outside priors for the means in case we're not able to get meaningful estimates from the historical data (more on this later).
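One note on the trend term: in the code below it is specified with `AddLocalLinearTrend()`, which generalizes the local level above by letting the slope of the trend evolve as well. A sketch of its state equations (the standard local linear trend parameterization, which is what the bsts documentation describes) is:

$$\mu_{t+1} = \mu_t + \delta_t + \eta_{\mu,t}, \qquad \eta_{\mu,t} \sim N(0, \sigma_\mu^2)$$

$$\delta_{t+1} = \delta_t + \eta_{\delta,t}, \qquad \eta_{\delta,t} \sim N(0, \sigma_\delta^2)$$

where $\delta_t$ is the slope of the trend at time $t$.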
```r
### Load the data
data("AirPassengers")
Y <- window(AirPassengers, start = c(1949, 1), end = c(1959, 12))
y <- log10(Y)

### Run the bsts model
ss <- AddLocalLinearTrend(list(), y)
ss <- AddSeasonal(ss, y, nseasons = 12)
bsts.model <- bsts(y, state.specification = ss, niter = 500, ping = 0, seed = 2016)

### Get a suggested number of burn-ins
burn <- SuggestBurn(0.1, bsts.model)

### Predict
p <- predict.bsts(bsts.model, horizon = 12, burn = burn, quantiles = c(.025, .975))

### Actual versus predicted
d2 <- data.frame(
  # fitted values (one-step-ahead predictions) plus the holdout forecast
  c(10^as.numeric(-colMeans(bsts.model$one.step.prediction.errors[-(1:burn), ]) + y),
    10^as.numeric(p$mean)),
  # actual data and dates
  as.numeric(AirPassengers),
  as.Date(time(AirPassengers)))
names(d2) <- c("Fitted", "Actual", "Date")

### MAPE (mean absolute percentage error)
MAPE <- filter(d2, year(Date) > 1959) %>%
  summarise(MAPE = mean(abs(Actual - Fitted) / Actual))

### 95% forecast credible interval
posterior.interval <- cbind.data.frame(
  10^as.numeric(p$interval[1, ]),
  10^as.numeric(p$interval[2, ]),
  subset(d2, year(Date) > 1959)$Date)
names(posterior.interval) <- c("LL", "UL", "Date")

### Join intervals to the forecast
d3 <- left_join(d2, posterior.interval, by = "Date")

### Plot actual versus predicted with credible intervals for the holdout period
ggplot(data = d3, aes(x = Date)) +
  geom_line(aes(y = Actual, colour = "Actual"), size = 1.2) +
  geom_line(aes(y = Fitted, colour = "Fitted"), size = 1.2, linetype = 2) +
  geom_vline(xintercept = as.numeric(as.Date("1959-12-01")), linetype = 2) +
  geom_ribbon(aes(ymin = LL, ymax = UL), fill = "grey", alpha = 0.5) +
  ggtitle(paste0("BSTS -- Holdout MAPE = ", round(100 * MAPE, 2), "%")) +
  theme_bw() + theme(legend.title = element_blank()) + ylab("") + xlab("") +
  theme(axis.text.x = element_text(angle = -90, hjust = 0))

### Inner quartiles of the holdout forecast distribution
credible.interval <- cbind.data.frame(
  10^as.numeric(apply(p$distribution, 2, function(f) quantile(f, 0.75))),
  10^as.numeric(apply(p$distribution, 2, median)),
  10^as.numeric(apply(p$distribution, 2, function(f) quantile(f, 0.25))),
  subset(d3, year(Date) > 1959)$Date)
names(credible.interval) <- c("p75", "Median", "p25", "Date")

credible.interval %>% knitr::kable()
```
p75 | Median | p25 | Date |
---|---|---|---|
438 | 425 | 414 | 1960-01-01 |
422 | 408 | 397 | 1960-02-01 |
496 | 478 | 457 | 1960-03-01 |
476 | 461 | 443 | 1960-04-01 |
491 | 472 | 451 | 1960-05-01 |
557 | 536 | 509 | 1960-06-01 |
632 | 600 | 572 | 1960-07-01 |
647 | 614 | 573 | 1960-08-01 |
547 | 513 | 478 | 1960-09-01 |
492 | 461 | 433 | 1960-10-01 |
435 | 410 | 382 | 1960-11-01 |
489 | 458 | 425 | 1960-12-01 |
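Two things worth noting here: the ribbon in the plot uses the 2.5th and 97.5th percentiles requested in the `predict.bsts()` call, while the table above summarizes the inner quartiles (p25, median, p75) computed directly from the posterior predictive draws in `p$distribution`. Also, `SuggestBurn(0.1, bsts.model)` simply discards the first 10% of the MCMC draws as burn-in.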
You can also extract the different components of the model:
```r
### Set up the model
data("AirPassengers")
Y <- window(AirPassengers, start = c(1949, 1), end = c(1959, 12))
y <- log10(Y)
ss <- AddLocalLinearTrend(list(), y)
ss <- AddSeasonal(ss, y, nseasons = 12)
bsts.model <- bsts(y, state.specification = ss, niter = 500, ping = 0, seed = 2016)

### Get a suggested number of burn-ins
burn <- SuggestBurn(0.1, bsts.model)

### Extract the components
components <- cbind.data.frame(
  colMeans(bsts.model$state.contributions[-(1:burn), "trend", ]),
  colMeans(bsts.model$state.contributions[-(1:burn), "seasonal.12.1", ]),
  as.Date(time(Y)))
names(components) <- c("Trend", "Seasonality", "Date")
components <- reshape2::melt(components, id.vars = "Date")
names(components) <- c("Date", "Component", "Value")

### Plot
ggplot(data = components, aes(x = Date, y = Value)) + geom_line() +
  theme_bw() + theme(legend.title = element_blank()) + ylab("") + xlab("") +
  facet_grid(Component ~ ., scales = "free") + guides(colour = "none") +
  theme(axis.text.x = element_text(angle = -90, hjust = 0))
```
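A small gotcha: the component names in `state.contributions` come from the state specification, so with `AddSeasonal(ss, y, nseasons = 12)` the seasonal component is stored under `"seasonal.12.1"` (the trailing `.1` is the season duration).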
And, of course, we can use Bayes for variable selection via the spike-and-slab method (I was trained to call it stochastic search variable selection… but whatever).
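For reference, the general textbook form of a spike-and-slab prior (a sketch, not bsts's exact parameterization, which uses a Zellner-style g-prior for the slab) is:

$$\gamma_k \sim \text{Bernoulli}(\pi_k), \qquad \beta_k \mid \gamma_k = 1 \sim N(b_k, \tau_k^2)$$

The "spike" is the prior inclusion probability $\pi_k$ that coefficient $k$ is non-zero, and the "slab" is the prior on its magnitude when it is included. These are exactly the knobs we will turn in the priors section below.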
```r
### Fit the model with regressors
data(iclaims)
ss <- AddLocalLinearTrend(list(), initial.claims$iclaimsNSA)
ss <- AddSeasonal(ss, initial.claims$iclaimsNSA, nseasons = 52)
bsts.reg <- bsts(iclaimsNSA ~ ., state.specification = ss,
                 data = initial.claims, niter = 500, ping = 0, seed = 2016)

### Get the number of burn-ins to discard
burn <- SuggestBurn(0.1, bsts.reg)

### Helper function to get the positive mean of a vector
PositiveMean <- function(b) {
  b <- b[abs(b) > 0]
  if (length(b) > 0)
    return(mean(b))
  return(0)
}

### Get the average coefficients when variables were selected (non-zero slopes)
coeff <- data.frame(reshape2::melt(apply(bsts.reg$coefficients[-(1:burn), ], 2, PositiveMean)))
coeff$Variable <- as.character(row.names(coeff))
ggplot(data = coeff, aes(x = Variable, y = value)) +
  geom_bar(stat = "identity", position = "identity") +
  theme(axis.text.x = element_text(angle = -90, hjust = 0)) +
  xlab("") + ylab("") + ggtitle("Average coefficients")

### Inclusion probabilities -- i.e., how often were the variables selected
inclusionprobs <- reshape2::melt(colMeans(bsts.reg$coefficients[-(1:burn), ] != 0))
inclusionprobs$Variable <- as.character(row.names(inclusionprobs))
ggplot(data = inclusionprobs, aes(x = Variable, y = value)) +
  geom_bar(stat = "identity", position = "identity") +
  theme(axis.text.x = element_text(angle = -90, hjust = 0)) +
  xlab("") + ylab("") + ggtitle("Inclusion probabilities")
```
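One subtlety when reading these plots: the average coefficients are conditional on inclusion (the `PositiveMean` helper ignores the draws where a coefficient was exactly zero), so a variable with a low inclusion probability can still show a sizeable conditional coefficient.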
We can again pull out the components, this time including the contribution of the regression term:

```r
### Fit the model with regressors
data(iclaims)
ss <- AddLocalLinearTrend(list(), initial.claims$iclaimsNSA)
ss <- AddSeasonal(ss, initial.claims$iclaimsNSA, nseasons = 52)
bsts.reg <- bsts(iclaimsNSA ~ ., state.specification = ss,
                 data = initial.claims, niter = 500, ping = 0, seed = 2016)

### Get the number of burn-ins to discard
burn <- SuggestBurn(0.1, bsts.reg)

### Get the components
components.withreg <- cbind.data.frame(
  colMeans(bsts.reg$state.contributions[-(1:burn), "trend", ]),
  colMeans(bsts.reg$state.contributions[-(1:burn), "seasonal.52.1", ]),
  colMeans(bsts.reg$state.contributions[-(1:burn), "regression", ]),
  as.Date(time(initial.claims)))
names(components.withreg) <- c("Trend", "Seasonality", "Regression", "Date")
components.withreg <- reshape2::melt(components.withreg, id.vars = "Date")
names(components.withreg) <- c("Date", "Component", "Value")

### Plot the components
ggplot(data = components.withreg, aes(x = Date, y = Value)) + geom_line() +
  theme_bw() + theme(legend.title = element_blank()) + ylab("") + xlab("") +
  facet_grid(Component ~ ., scales = "free") + guides(colour = "none") +
  theme(axis.text.x = element_text(angle = -90, hjust = 0))
```
And of course we can add priors. Following the Stitch Fix example, we force `unemployment.office` into the model with a prior inclusion probability of 1 and a prior coefficient estimate of 0.6, while down-weighting the remaining variables:
```r
### Set the prior inclusion probabilities (spikes) and prior coefficient means
### (keep unemployment.office with an expected coefficient of 0.6;
### give everything else a 5% prior inclusion probability and a zero mean)
xnames <- colnames(model.matrix(iclaimsNSA ~ ., data = initial.claims))
prior.spikes <- ifelse(xnames == "unemployment.office", 1, 0.05)
prior.mean <- ifelse(xnames == "unemployment.office", 0.6, 0)

### Helper function to get the positive mean of a vector
PositiveMean <- function(b) {
  b <- b[abs(b) > 0]
  if (length(b) > 0)
    return(mean(b))
  return(0)
}

### Set up the priors
prior <- SpikeSlabPrior(x = model.matrix(iclaimsNSA ~ ., data = initial.claims),
                        y = initial.claims$iclaimsNSA,
                        prior.information.weight = 200,
                        prior.inclusion.probabilities = prior.spikes,
                        optional.coefficient.estimate = prior.mean)

### Run the bsts model with the specified priors
ss <- AddLocalLinearTrend(list(), initial.claims$iclaimsNSA)
ss <- AddSeasonal(ss, initial.claims$iclaimsNSA, nseasons = 52)
bsts.reg.priors <- bsts(iclaimsNSA ~ ., state.specification = ss,
                        data = initial.claims, niter = 500,
                        prior = prior, ping = 0, seed = 2016)
burn <- SuggestBurn(0.1, bsts.reg.priors)

### Get the average coefficients when variables were selected (non-zero slopes)
coeff <- data.frame(reshape2::melt(apply(bsts.reg.priors$coefficients[-(1:burn), ], 2, PositiveMean)))
coeff$Variable <- as.character(row.names(coeff))
ggplot(data = coeff, aes(x = Variable, y = value)) +
  geom_bar(stat = "identity", position = "identity") +
  theme(axis.text.x = element_text(angle = -90, hjust = 0)) +
  xlab("") + ylab("")
```
So, with all of this said, we can build Bayesian structural time series models easily, add prior information, use variable selection, and honestly convey our uncertainty about future predictions. This is pretty powerful stuff. It is also why these techniques are used for anomaly detection and causal inference.
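As a pointer for the causal inference piece: Google's CausalImpact package is built directly on top of bsts. Here is a minimal sketch, adapted from the package documentation, using simulated data with an intervention at observation 71 (the series, the 1.2 slope, and the +10 lift are purely illustrative):

```r
library(CausalImpact)

### Simulate a control series (x1) and a response (y) that tracks it
set.seed(1)
x1 <- 100 + arima.sim(model = list(ar = 0.999), n = 100)
y <- 1.2 * x1 + rnorm(100)
y[71:100] <- y[71:100] + 10  # add a lift after the "intervention" at t = 71

### Estimate the causal effect of the intervention on y
pre.period <- c(1, 70)
post.period <- c(71, 100)
impact <- CausalImpact(cbind(y, x1), pre.period, post.period)

summary(impact)
plot(impact)
```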
Citation
BibTeX citation:

```bibtex
@online{dewitt2018,
  author = {Michael E. DeWitt},
  title = {Bayesian Time Series Analysis with bsts},
  date = {2018-07-05},
  url = {https://michaeldewittjr.com/articles/2018-07-05-bayesian-time-series-analysis-with-bsts},
  langid = {en}
}
```
For attribution, please cite this work as:
Michael E. DeWitt. 2018. "Bayesian Time Series Analysis with bsts." July 5, 2018. https://michaeldewittjr.com/articles/2018-07-05-bayesian-time-series-analysis-with-bsts