BayesChange Tutorial

Here we provide a brief tutorial of the BayesChange package. The BayesChange package contains two main functions: one that performs change point detection on time series and epidemic diffusions, and one that performs clustering of time series and epidemic diffusions with common change points. Here we briefly show how to use them.

library(BayesChange)

Detecting change points

The function detect_cp provides a method for detecting change points; it is based on the work of Martínez and Mena (2014) and on Corradin et al. (2024).

Depending on the structure of the data, detect_cp performs change point detection on either univariate or multivariate time series. We import the dataset eu_inflation, which contains the standardized monthly inflation rates from the Harmonized Index of Consumer Prices (HICP) for the 12 COICOP expenditure categories across European Union countries. The data span the period from February 1997 to December 2024, resulting in a matrix of \(12\) rows and \(335\) columns.

data("eu_inflation")
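As a quick sanity check (not part of the analysis), we can verify that the layout matches the description above, with one expenditure category per row and one month per column:

```r
# rows: COICOP expenditure categories; columns: months
dim(eu_inflation)
```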

Now we can run the function detect_cp. As arguments we need to specify the number of iterations, the number of burn-in steps, the probability q of performing a split at each step, and a list with the parameters of the likelihood and the priors: here prior_var_phi, the prior variance of the autoregressive coefficient phi, and prior_delta_c and prior_delta_d, the parameters of the prior distributions. Since we deal with time series, we also need to set kernel = "ts".

out <- detect_cp(data = eu_inflation[1,],
                 n_iterations = 2000, n_burnin = 500, q = 0.5,
                 params = list(prior_var_phi = 0.1, prior_delta_c = 1, prior_delta_d = 1), kernel = "ts")
#> Completed:   200/2000 - in 0.054698 sec
#> Completed:   400/2000 - in 0.103796 sec
#> Completed:   600/2000 - in 0.149979 sec
#> Completed:   800/2000 - in 0.197319 sec
#> Completed:   1000/2000 - in 0.268481 sec
#> Completed:   1200/2000 - in 0.361703 sec
#> Completed:   1400/2000 - in 0.444419 sec
#> Completed:   1600/2000 - in 0.499948 sec
#> Completed:   1800/2000 - in 0.549151 sec
#> Completed:   2000/2000 - in 0.596629 sec

With the methods print and summary we can get information about the fitted object and the algorithm.

print(out)
#> DetectCpObj object
#> Type: change points detection on univariate time series
summary(out)
#> DetectCpObj object
#> Change point detection summary:
#>  - Data: univariate time series
#>  - Burn-in iterations: 500 
#>  - MCMC iterations: 1500 
#>  - Average number of detected change points: 7.01 
#>  - Computational time: 0.6 seconds
#>  
#> Use plot() for a detailed visualization or posterior_estimate() to analyze the detected change points.

In order to get a point estimate of the change points we can use the method posterior_estimate, which relies on the method salso by Dahl, Johnson, and Müller (2022) to get the final latent order, and then detects the change points.

cp_est <- posterior_estimate(out, loss = "binder")
# regime lengths -> cumulative end times; dropping the last entry and
# adding 1 gives the first time of each new regime, i.e. the change points
cumsum(table(cp_est))[-length(table(cp_est))] + 1
#>   1   2   3   4   5   6   7   8 
#>  42 202 241 300 306 321 322 326
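The conversion from the estimated labels to change point locations can be illustrated on a toy label vector (purely illustrative, not output of the package): table gives the length of each regime, cumsum the time at which each regime ends, and dropping the last entry and adding 1 gives the time at which each new regime starts.

```r
labels <- c(1, 1, 1, 1, 2, 2, 2, 3, 3, 3)  # three regimes over 10 time points
sizes  <- table(labels)                    # regime lengths: 4, 3, 3
cumsum(sizes)[-length(sizes)] + 1          # change points at times 5 and 8
```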

The package also provides a method for plotting the change points.

plot(out, loss = "binder")

We can assess convergence of the latent order posterior chain, for example, by inspecting the traceplot of its log-likelihood with coda::traceplot.

coda::traceplot(out$lkl_MCMC, ylab = "Log-Likelihood")
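Other standard coda diagnostics can be applied to the same chain; for instance, the effective sample size (assuming out$lkl_MCMC is accepted by coda as a chain, as in the traceplot call above):

```r
# effective sample size of the log-likelihood chain
coda::effectiveSize(out$lkl_MCMC)
```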

If we instead consider a matrix of data, detect_cp automatically performs multivariate change point detection. We define the parameters.

params_multi <- list(m_0 = rep(0,3),
                     k_0 = 1,
                     nu_0 = 10,
                     S_0 = diag(0.1,3,3),
                     prior_var_phi = 0.1,
                     prior_delta_c = 1,
                     prior_delta_d = 1)

Arguments m_0, k_0, nu_0, S_0, prior_var_phi, prior_delta_c and prior_delta_d correspond to the parameters of the prior distributions for the multivariate likelihood.

out <- detect_cp(data = eu_inflation[1:3,], n_iterations = 2000,
          n_burnin = 500, q = 0.5, params = params_multi, kernel = "ts")
#> Completed:   200/2000 - in 0.042717 sec
#> Completed:   400/2000 - in 0.088591 sec
#> Completed:   600/2000 - in 0.135588 sec
#> Completed:   800/2000 - in 0.181293 sec
#> Completed:   1000/2000 - in 0.226785 sec
#> Completed:   1200/2000 - in 0.272667 sec
#> Completed:   1400/2000 - in 0.3184 sec
#> Completed:   1600/2000 - in 0.364438 sec
#> Completed:   1800/2000 - in 0.410455 sec
#> Completed:   2000/2000 - in 0.454594 sec

table(posterior_estimate(out, loss = "binder"))
#> 
#>   1   2   3   4   5   6   7 
#>  42 134  22   2  78  22  36
plot(out, loss = "binder", plot_freq = TRUE)

Function detect_cp can also be used to detect change points on survival functions. We consider the synthetic dataset epi_synthetic.

data("epi_synthetic")

To run detect_cp on epidemiological data we need to set kernel = "epi". Moreover, besides the usual parameters, we need to set the number of Monte Carlo replications M for the approximation of the integrated likelihood and the recovery rate xi. Arguments a0 and b0 are optional and correspond to the parameters of the gamma distribution used to integrate the likelihood.

params_epi <- list(M = 250, xi = 1/8, a0 = 4, b0 = 10, I0_var = 0.1)

out <- detect_cp(data = epi_synthetic, n_iterations = 2000, n_burnin = 500,
                 q = 0.25, params = params_epi, kernel = "epi")
#> Completed:   200/2000 - in 1.84128 sec
#> Completed:   400/2000 - in 3.62603 sec
#> Completed:   600/2000 - in 5.41448 sec
#> Completed:   800/2000 - in 7.19874 sec
#> Completed:   1000/2000 - in 8.98498 sec
#> Completed:   1200/2000 - in 10.7708 sec
#> Completed:   1400/2000 - in 12.5573 sec
#> Completed:   1600/2000 - in 14.4121 sec
#> Completed:   1800/2000 - in 16.2441 sec
#> Completed:   2000/2000 - in 18.0292 sec

print(out)
#> DetectCpObj object
#> Type: change points detection on an epidemic diffusion

Here too, the plot function displays the survival function and the position of the change points.

plot(out)

Clustering time dependent data with common change points

BayesChange contains another function, clust_cp, that clusters univariate time series, multivariate time series, and survival functions with common change points. Details about this method can be found in Corradin et al. (2026).

In clust_cp the argument kernel must be specified: if data are time series, then kernel = "ts" must be set. The algorithm then automatically detects whether the data are univariate or multivariate.

For this example we consider the dataset stock_uni, which contains the daily mean stock prices for the 50 largest companies (by market capitalization) in the Standard & Poor's 500 Index from January 1, 2020 to January 1, 2022.

data("stock_uni")

Arguments that need to be specified in clust_cp are the number of iterations n_iterations, the number of elements B used to estimate the normalisation constant, the number L of split-and-merge steps performed when a new partition is proposed, and a list with the parameters of the algorithm, the likelihood and the priors.

params_uni <- list(a = 1,
                   b = 1,
                   c = 1,
                   phi = 0.1)

out <- clust_cp(data = stock_uni[1:5,], n_iterations = 2000, n_burnin = 500,
                L = 1, q = 0.5, B = 1000, params = params_uni, kernel = "ts")
#> Normalization constant - completed:  100/1000 - in 0.052958 sec
#> Normalization constant - completed:  200/1000 - in 0.096797 sec
#> Normalization constant - completed:  300/1000 - in 0.150875 sec
#> Normalization constant - completed:  400/1000 - in 0.202558 sec
#> Normalization constant - completed:  500/1000 - in 0.246726 sec
#> Normalization constant - completed:  600/1000 - in 0.291481 sec
#> Normalization constant - completed:  700/1000 - in 0.340594 sec
#> Normalization constant - completed:  800/1000 - in 0.384945 sec
#> Normalization constant - completed:  900/1000 - in 0.442244 sec
#> Normalization constant - completed:  1000/1000 - in 0.493017 sec
#> 
#> ------ MAIN LOOP ------
#> 
#> Completed:   200/2000 - in 0.651343 sec
#> Completed:   400/2000 - in 1.20004 sec
#> Completed:   600/2000 - in 1.7687 sec
#> Completed:   800/2000 - in 2.3022 sec
#> Completed:   1000/2000 - in 2.94061 sec
#> Completed:   1200/2000 - in 3.48569 sec
#> Completed:   1400/2000 - in 3.97329 sec
#> Completed:   1600/2000 - in 4.41385 sec
#> Completed:   1800/2000 - in 4.85733 sec
#> Completed:   2000/2000 - in 5.37571 sec

posterior_estimate(out, loss = "binder")
#> [1] 1 2 3 1 2

For clustering univariate time series, the plot method displays the data colored according to the assigned cluster.

plot(out, loss = "binder")

The method plot_psm shows the posterior similarity matrix of the clustering. Setting reorder = TRUE orders the matrix according to the estimated clustering.

plot_psm(out, reorder = TRUE)

If time series are multivariate, data must be an array where each element is a multivariate time series represented by a matrix; each row of the matrix is a component of the time series. Here we use the dataset stock_multi, which contains, for each company, the daily opening and closing stock prices.

data("stock_multi")
params_multi <- list(m_0 = rep(0,2),
                     k_0 = 1,
                     nu_0 = 10,
                     S_0 = diag(1,2,2),
                     phi = 0.1)
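Before running clust_cp it can be useful to check the array layout. Assuming the third index runs over companies (as in the subsetting stock_multi[,,1:5] used below), each slice is a matrix with one row per component:

```r
dim(stock_multi)         # components x times x companies
dim(stock_multi[, , 1])  # first company: a matrix with 2 rows (open, close)
```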

out <- clust_cp(data = stock_multi[,,1:5], n_iterations = 2500, n_burnin = 500,
                L = 1, B = 1000, params = params_multi, kernel = "ts")
#> Normalization constant - completed:  100/1000 - in 0.012173 sec
#> Normalization constant - completed:  200/1000 - in 0.024415 sec
#> Normalization constant - completed:  300/1000 - in 0.036684 sec
#> Normalization constant - completed:  400/1000 - in 0.049018 sec
#> Normalization constant - completed:  500/1000 - in 0.061448 sec
#> Normalization constant - completed:  600/1000 - in 0.073816 sec
#> Normalization constant - completed:  700/1000 - in 0.086113 sec
#> Normalization constant - completed:  800/1000 - in 0.098562 sec
#> Normalization constant - completed:  900/1000 - in 0.110974 sec
#> Normalization constant - completed:  1000/1000 - in 0.123239 sec
#> 
#> ------ MAIN LOOP ------
#> 
#> Completed:   250/2500 - in 0.30147 sec
#> Completed:   500/2500 - in 0.619631 sec
#> Completed:   750/2500 - in 0.941213 sec
#> Completed:   1000/2500 - in 1.25477 sec
#> Completed:   1250/2500 - in 1.57054 sec
#> Completed:   1500/2500 - in 1.88314 sec
#> Completed:   1750/2500 - in 2.19978 sec
#> Completed:   2000/2500 - in 2.51375 sec
#> Completed:   2250/2500 - in 2.83197 sec
#> Completed:   2500/2500 - in 3.15078 sec

posterior_estimate(out, loss = "binder")
#> [1] 1 2 3 1 2
plot(out, loss = "binder")

Finally, if we set kernel = "epi", clust_cp clusters survival functions with common change points. Details can again be found in Corradin et al. (2026).

Data must be a matrix where each row contains the number of infected individuals at each time. The package includes the dataset epi_synthetic_multi with synthetic multivariate epidemic diffusions.

data("epi_synthetic_multi")

params_epi <- list(M = 100, xi = 1/8,
                   alpha_SM = 1,
                   a0 = 4,
                   b0 = 10,
                   I0_var = 0.1,
                   avg_blk = 2)

out <- clust_cp(epi_synthetic_multi[,10:150], n_iterations = 2000, n_burnin = 500,
                L = 1, B = 1000, params = params_epi, kernel = "epi")

posterior_estimate(out, loss = "binder")
plot(out, loss = "binder")
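As for time series clustering, the posterior similarity matrix can be inspected here as well; we assume plot_psm accepts the epi clustering object in the same way as the time series one shown earlier:

```r
plot_psm(out, reorder = TRUE)
```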
Corradin, Riccardo, Luca Danese, Wasiur R. KhudaBukhsh, and Andrea Ongaro. 2024. “Model-Based Clustering of Time-Dependent Observations with Common Structural Changes.” https://arxiv.org/abs/2410.09552.
———. 2026. “Model-Based Clustering of Time-Dependent Observations with Common Structural Changes.” Statistics and Computing 36 (1): 7. https://doi.org/10.1007/s11222-025-10756-x.
Dahl, David B., Devin J. Johnson, and Peter Müller. 2022. “Search Algorithms and Loss Functions for Bayesian Clustering.” Journal of Computational and Graphical Statistics 31 (4): 1189–1201. https://doi.org/10.1080/10618600.2022.2069779.
Martínez, Asael Fabian, and Ramsés H. Mena. 2014. “On a Nonparametric Change Point Detection Model in Markovian Regimes.” Bayesian Analysis 9 (4): 823–58. https://doi.org/10.1214/14-BA878.