Computational Models of Behavior, part 2

Nov 13, 2021 · Ruslan Klymentiev

Image credit: geralt from Pixabay

Recap on the Previous Part

In the previous part we saw an example of fitting a model to behavioral data using maximum likelihood estimation (MLE). The original data set was artificially generated using a modification of a Q-learning model (Seymour et al., 2012) for a 4-armed bandit paradigm, where each option had a unique probability of reward and punishment. We went over model comparison, model validation, and how to finally answer a research question using the fitted parameters. We also discussed that MLE comes with limitations, such as the possibility that the algorithm gets stuck in a local minimum, meaning that the fitted parameters may not be optimal.

In this part we are going to look at another approach, Bayesian inference. In particular, we will use the hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks) package (Ahn et al., 2017), which is built on top of the RStan interface (Stan Development Team, 2020) for Bayesian data analysis. We are going to use the same data set and will try to answer the same question: “Does treatment affect humans’ reward sensitivity and the learning rate for a positive outcome?”.

Even though there is a Python version of the hBayesDM package, we will work in R, since I personally prefer to perform “hardcore statistical analysis” in R.

And once again, these posts have been designed to show an example of how to fit the models. Readers can refer to Gelman et al. (2013) for an in-depth overview of Bayesian inference.

Objectives:

  1. How to answer research questions using modeling.
  2. How to fit reinforcement learning (RL) models to behavioral data using Bayesian inference.
  3. How to compare models and choose the best fitting one.
  4. Comparison of MLE and Bayesian inference results.

Just to remind ourselves, this is how the initial data set and the distribution of the true parameters (which were used to generate the data) look:

Code

library(hBayesDM)
library(tidyverse)
library(knitr)
library(kableExtra)
library(gridExtra)
library(grid)

agent_data <- read.csv("agent_data.csv", sep = ",")
true_params <- read.csv("true_params.csv", sep = ",")

head(agent_data, n = 10) %>% 
  kable(caption = "Initial data sample") %>% 
  kable_classic_2()


Table 1: Sample of the initial data

group     subjID   trial   choice   gain   loss
Control   1        1       1        0      -1
Control   1        2       1        0      -1
Control   1        3       3        1      0
Control   1        4       3        1      0
Control   1        5       1        0      -1
Control   1        6       3        1      -1
Control   1        7       4        0      0
Control   1        8       3        0      0
Control   1        9       1        1      -1
Control   1        10      4        1      0
Code

p1 <- ggplot(true_params, aes(y = alpha_rew, x = group, fill = group)) +
  geom_boxplot() +
  geom_jitter(position = position_jitter(0.1)) +
  theme(legend.position = "none") +
  labs(
    y = expression(alpha[rew]),
    x = "") +
  ylim(0, .55)

p2 <- ggplot(true_params, aes(y = alpha_pun, x = group, fill = group)) +
  geom_boxplot() +
  geom_jitter(position = position_jitter(0.1)) +
  theme(legend.position = "none") +
  labs(
    y = expression(alpha[pun]),
    x = "") +
  ylim(0, .55)

p3 <- ggplot(true_params, aes(y = R, x = group, fill = group)) +
  geom_boxplot() +
  geom_jitter(position = position_jitter(0.1)) +
  theme(legend.position = "none") +
  labs(
    y = expression(R),
    x = "Group") +
  ylim(1, 8)

p4 <- ggplot(true_params, aes(y = P, x = group, fill = group)) +
  geom_boxplot() +
  geom_jitter(position = position_jitter(0.1)) +
  theme(legend.position = "none") +
  labs(
    y = expression(P),
    x = "Group") +
  ylim(1, 8)

grid.arrange(
  p1, p2, p3, p4, nrow = 2,
  top = textGrob("True Parameters", gp = gpar(font = 2)))

Intuition Behind Bayesian Analysis

The ideas behind MLE and Bayesian analysis (BA) are conceptually different. In MLE we are looking for a set of parameters $\hat{\Theta}$ that maximize the likelihood function given the observed data $X = \{ x_1, x_2, ..., x_T \}$ for a specific model:

$$\hat{\Theta}_{ML} = \underset{\Theta}{\text{arg max}} \left \{ \mathcal{L} (\Theta | X) \right \} = \underset{\Theta}{\text{arg max}} \left \{ \prod _{i=1}^{T} P(x_i | \Theta) \right \}$$

where $T$ is the number of observations (trials).
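
As a quick refresher of how this looks in code, here is a minimal sketch (using a hypothetical Bernoulli example rather than our bandit model) of MLE in R: we define a negative log-likelihood and let optim() search for the parameter value that minimizes it.

# hypothetical example: 100 Bernoulli trials with an unknown success probability
set.seed(1)
x <- rbinom(n = 100, size = 1, prob = 0.7)

# negative log-likelihood of the data given a candidate parameter theta
neg_log_lik <- function(theta, x) {
  -sum(dbinom(x, size = 1, prob = theta, log = TRUE))
}

# MLE: the value of theta that minimizes the negative log-likelihood
fit <- optim(par = 0.5, fn = neg_log_lik, x = x,
             method = "Brent", lower = 0.001, upper = 0.999)
fit$par  # close to the sample mean of x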

In BA we are looking for the probability of the set of parameters $\Theta$ of the model, given the data $X$. In a formal way:

$$P(\Theta|X) = \frac{P(X|\Theta) \cdot P(\Theta)}{P(X)}$$
  • $P(X|\Theta)$ is the likelihood, the probability of observing the data given the set of parameters.
  • $P(\Theta)$ is the prior probability of the parameters.
  • $P(X)$ is the probability (or evidence) of the data. It is also called the normalizing constant, since it ensures that the posterior distribution integrates to one.
  • $P(\Theta|X)$ is the posterior probability of the parameters given the observed data.

It is difficult (or even impossible) to compute the exact value of $P(X)$, which is why the equation above is usually simplified to the following:

$$P(\Theta|X) \propto P(X|\Theta) \cdot P(\Theta)$$

Over the last decades, several algorithms have been developed to draw samples from the posterior distribution based on this equation. In particular, the rstan package uses the Hamiltonian Monte Carlo (HMC) algorithm, which is part of the Markov chain Monte Carlo (MCMC) family of algorithms.
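
To get some intuition for why the unnormalized product $P(X|\Theta) \cdot P(\Theta)$ is all we really need, here is a minimal sketch (again with a hypothetical Bernoulli example, not a Stan model) that approximates the posterior on a grid of parameter values and only normalizes at the very end:

# hypothetical data: 20 successes out of 30 trials
n_success <- 20
n_trials <- 30

theta_grid <- seq(0, 1, length.out = 1000)             # candidate parameter values
prior <- dbeta(theta_grid, 2, 2)                        # a weakly informative prior
likelihood <- dbinom(n_success, n_trials, theta_grid)   # P(X | theta)

unnorm_post <- likelihood * prior                       # proportional to the posterior
posterior <- unnorm_post / sum(unnorm_post)             # normalize over the grid

theta_grid[which.max(posterior)]                        # posterior mode, ~0.66

Grid approximation quickly becomes infeasible as the number of parameters grows, which is exactly where MCMC samplers such as HMC come in.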

Also, as the name of the package suggests, hBayesDM uses a hierarchical Bayesian approach to fit the models. In our case, this means that each subject gets a unique set of parameters $\Theta$ drawn from group-level distributions; in other words, we allow between-subject variation while still pooling information across subjects.

The benefit of hBayesDM is that it comes with a wide variety of predefined models for different paradigms, so we don’t have to write them from scratch. The Stan files with the model specifications can be found here.

Let us fit the model introduced by Seymour et al. (2012) with the help of the bandit4arm_4par function. Note that we are going to fit the model separately for the control and treatment groups, since in the end we are interested in comparing the parameters between the groups.

Based on the model specification, this is how the model structure looks:

  • $\mu$, $\sigma$ - hyperparameters of the model that affect the parameters $ \left \{ \alpha^{\text{reward}}, \alpha^{\text{punishment}}, R, P \right \}$ for each subject;
  • $S$ - number of subjects;
  • $y$ - choice data.

Priors:

$$\mu \sim \mathcal{N} (\mu = 0, \sigma = 1)$$
$$\sigma \sim \mathcal{N} (\mu = 0, \sigma = 0.2)$$
$$\alpha^{\text{reward}}_{\text{prior}} \sim \mathcal{N} (\mu = 0, \sigma = 1)$$
$$\alpha^{\text{punishment}}_{\text{prior}} \sim \mathcal{N} (\mu = 0, \sigma = 1)$$
$$R_{\text{prior}} \sim \mathcal{N} (\mu = 0, \sigma = 1)$$
$$P_{\text{prior}} \sim \mathcal{N} (\mu = 0, \sigma = 1)$$

Transformation of parameters:

$$\alpha^{\text{reward}} = \Phi (\mu + \sigma \alpha^{\text{reward}}_{\text{prior}})$$
$$\alpha^{\text{punishment}} = \Phi (\mu + \sigma \alpha^{\text{punishment}}_{\text{prior}})$$
$$R = \Phi (\mu + \sigma R_{\text{prior}}) \cdot 30$$
$$P = \Phi (\mu + \sigma P_{\text{prior}}) \cdot 30$$

$\Phi(\cdot)$ is the standard normal cumulative distribution function, whose output lies in the $\left [ 0, 1 \right ]$ range. By multiplying $R$ and $P$ by $30$ we allow the sensitivity values to lie in the $\left [ 0, 30 \right ]$ range.
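
As a sanity check of what these transformations do, here is a small sketch (with arbitrary, made-up hyperparameter values, not the fitted ones) of how a subject-level “raw” parameter is mapped onto the interpretable scale using pnorm(), R’s implementation of $\Phi(\cdot)$:

# arbitrary group-level hyperparameters (illustration only)
mu <- -0.5
sigma <- 0.3

# hypothetical subject-level "raw" (unconstrained) parameter values
alpha_rew_prior <- 0.8
R_prior <- 1.2

alpha_rew <- pnorm(mu + sigma * alpha_rew_prior)   # learning rate, constrained to [0, 1]
R <- pnorm(mu + sigma * R_prior) * 30              # reward sensitivity, constrained to [0, 30]

c(alpha_rew = alpha_rew, R = R)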

# model fit
# control group
output_ctr <- bandit4arm_4par(
  data = filter(agent_data, group == "Control"), # subset of the data
  niter = 4000, # number of total iterations
  nwarmup = 1000, # number of initial iterations;
  # warmup samples will not be included in the final model
  nchain = 4, # number of Markov chains
  ncore = 2, # number of machine cores to use
  inc_postpred = TRUE # include simulated posterior data
)

# treatment group
output_trt <- bandit4arm_4par(
  data = filter(agent_data, group == "Treatment"),
  niter = 4000, nwarmup = 1000, nchain = 4,
  ncore = 2, inc_postpred = TRUE)

Model Diagnostics

Once the models have been fitted, it is important to check that the MCMC algorithm has converged. The simplest way is to check this visually. According to the hBayesDM recommendations, the plot of the chains’ values should look like “furry caterpillars”. In slightly more technical terms, we want to check whether the values fluctuate around a stationary point without any change of pattern (e.g., an increasing mean).

Control (CTR) Group:

plot(output_ctr, type = "trace", inc_warmup = TRUE)

Treatment (TRT) Group:

plot(output_trt, type = "trace", inc_warmup = TRUE)

Note that these plots show the chain values for the group-level posterior distributions of the parameters, not the individual posterior distributions.

Another approach for a convergence check is the $\hat{R}$ (R-hat) metric, which compares the between- and within-chain estimates of the model parameters (Vehtari et al., 2021). Values of $\hat{R}$ close to 1 indicate convergence; Vehtari et al. (2021) recommend requiring $\hat{R} < 1.01$.

CTR Group:

rhat(output_ctr, less = 1.01)
# TRUE: All Rhat values are less than 1.01

TRT Group:

rhat(output_trt, less = 1.01)
# TRUE: All Rhat values are less than 1.01

As we can see, the model seems to have converged for both groups.

Model Comparison

As discussed in part 1, it is always a good strategy to compare several models since, as we know, all models are wrong. The hBayesDM package makes model comparison really easy.

For simplicity we will add two models for comparison:

bandit4arm_lapse: a model with 4 parameters ($\alpha^{\text{reward}}$, $\alpha^{\text{punishment}}$, $R$, $P$) plus a noise (lapse) parameter ($\epsilon$).

Model algorithm:

  • for each step $t$ in episode do
    • $\scriptsize Q(a_j)_t \leftarrow Q(a_j)^{\text{reward}}_t + Q(a_j)^{\text{punishment}}_t$
    • Select action $a_t$ using softmax policy $\scriptsize P(a_j = a_t) = \frac{e^{Q_j}}{\sum_{j}e^{Q_j}} \left( (1- \epsilon) + \frac{\epsilon}{J} \right) $
    • Observe reward $r_t$ and punishment $p_t$ values
    • for each action $a_j$ in possible actions do
      • if $a_j$ == $a_t$ do
        • $\scriptsize Q(a_j)^{\text{reward}}_{t+1} \leftarrow Q(a_j)^{\text{reward}}_t + \alpha^{\text{reward}} \left( R \cdot r_t - Q(a_j)^{\text{reward}}_t \right)$
        • $\scriptsize Q(a_j)^{\text{punishment}}_{t+1} \leftarrow Q(a_j)^{\text{punishment}}_t + \alpha^{\text{punishment}} \left( P \cdot p_t - Q(a_j)^{\text{punishment}}_t \right)$
      • else
        • $\scriptsize Q(a_j)^{\text{reward}}_{t+1} \leftarrow Q(a_j)^{\text{reward}}_t + \alpha^{\text{reward}} \left( - Q(a_j)^{\text{reward}}_t \right)$
        • $\scriptsize Q(a_j)^{\text{punishment}}_{t+1} \leftarrow Q(a_j)^{\text{punishment}}_t + \alpha^{\text{punishment}} \left( - Q(a_j)^{\text{punishment}}_t \right)$
      • end
    • end
  • end

bandit4arm_singleA_lapse: a model with a single learning rate ($\alpha$) for both reward and punishment, plus noise.

Model algorithm:

  • for each step $t$ in episode do
    • $\scriptsize Q(a_j)_t \leftarrow Q(a_j)^{\text{reward}}_t + Q(a_j)^{\text{punishment}}_t$
    • Select action $a_t$ using softmax policy $\scriptsize P(a_j = a_t) = \frac{e^{Q_j}}{\sum_{j}e^{Q_j}} \left( (1- \epsilon) + \frac{\epsilon}{J} \right) $
    • Observe reward $r_t$ and punishment $p_t$ values
    • for each action $a_j$ in possible actions do
      • if $a_j$ == $a_t$ do
        • $\scriptsize Q(a_j)^{\text{reward}}_{t+1} \leftarrow Q(a_j)^{\text{reward}}_t + \alpha \left( R \cdot r_t - Q(a_j)^{\text{reward}}_t \right)$
        • $\scriptsize Q(a_j)^{\text{punishment}}_{t+1} \leftarrow Q(a_j)^{\text{punishment}}_t + \alpha \left( P \cdot p_t - Q(a_j)^{\text{punishment}}_t \right)$
      • else
        • $\scriptsize Q(a_j)^{\text{reward}}_{t+1} \leftarrow Q(a_j)^{\text{reward}}_t + \alpha \left( - Q(a_j)^{\text{reward}}_t \right)$
        • $\scriptsize Q(a_j)^{\text{punishment}}_{t+1} \leftarrow Q(a_j)^{\text{punishment}}_t + \alpha \left( - Q(a_j)^{\text{punishment}}_t \right)$
      • end
    • end
  • end
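
To make the pseudocode above more concrete, here is a minimal R sketch of a single trial of the lapse model (an illustration of the update rules listed above, not the Stan code that hBayesDM actually runs):

# softmax choice probabilities mixed with uniform "lapse" noise
softmax_lapse <- function(Q, epsilon) {
  p <- exp(Q) / sum(exp(Q))
  p * (1 - epsilon) + epsilon / length(Q)
}

# one-trial update of the reward and punishment Q-values
update_Q <- function(Q_rew, Q_pun, a, r, p, alpha_rew, alpha_pun, R, P) {
  for (j in seq_along(Q_rew)) {
    if (j == a) {   # chosen action: learn from the observed outcome
      Q_rew[j] <- Q_rew[j] + alpha_rew * (R * r - Q_rew[j])
      Q_pun[j] <- Q_pun[j] + alpha_pun * (P * p - Q_pun[j])
    } else {        # unchosen actions: values decay towards zero
      Q_rew[j] <- Q_rew[j] + alpha_rew * (-Q_rew[j])
      Q_pun[j] <- Q_pun[j] + alpha_pun * (-Q_pun[j])
    }
  }
  list(Q_rew = Q_rew, Q_pun = Q_pun)
}

# example usage with arbitrary parameter values
Q_rew <- rep(0, 4)
Q_pun <- rep(0, 4)
probs <- softmax_lapse(Q_rew + Q_pun, epsilon = 0.05)
a <- sample(1:4, size = 1, prob = probs)   # select an arm
update_Q(Q_rew, Q_pun, a, r = 1, p = -1,
         alpha_rew = 0.2, alpha_pun = 0.1, R = 5, P = 5)

For bandit4arm_singleA_lapse, alpha_rew and alpha_pun would simply share the same value.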

The leave-one-out cross-validation information criterion (LOOIC) can be used for model selection (Vehtari et al., 2017). The lower the value, the better the fit.

Code

# control group
# model with noise
output_ctr_lapse <- bandit4arm_lapse(
  data = filter(agent_data, group == "Control"),
  niter = 4000, nwarmup = 1000, nchain = 4,
  ncore = 2, inc_postpred = TRUE)

# model with single learning rate
output_ctr_singleA <- bandit4arm_singleA_lapse(
  data = filter(agent_data, group == "Control"),
  niter = 4000, nwarmup = 1000, nchain = 4,
  ncore = 2, inc_postpred = TRUE)

# treatment group
# model with noise
output_trt_lapse <- bandit4arm_lapse(
  data = filter(agent_data, group == "Treatment"),
  niter = 4000, nwarmup = 1000, nchain = 4,
  ncore = 3, inc_postpred = TRUE)

# model with single learning rate
output_trt_singleA <- bandit4arm_singleA_lapse(
  data = filter(agent_data, group == "Treatment"),
  niter = 4000, nwarmup = 1000, nchain = 4,
  ncore = 3, inc_postpred = TRUE)
# compare models for CTR group
df <- printFit(output_ctr, output_ctr_lapse, output_ctr_singleA, ic = "looic") 
df %>% 
  kable(caption = "Model comparison for CTR group") %>% 
  kable_classic_2() %>%
  row_spec(
    which(df$LOOIC == min(df$LOOIC)), bold = T,
    background = "#d6f5d6")

# compare models for TRT group  
df <- printFit(output_trt, output_trt_lapse, output_trt_singleA, ic = "looic") 
df %>% 
  kable(caption = "Model comparison for TRT group") %>% 
  kable_classic_2() %>%
  row_spec(
    which(df$LOOIC == min(df$LOOIC)), bold = T,
    background = "#d6f5d6")


Table 2: Model comparison for CTR group

Model                      LOOIC      LOOIC Weights
bandit4arm_4par            3292.845   0.5553943
bandit4arm_lapse           3295.377   0.1565749
bandit4arm_singleA_lapse   3294.158   0.2880308

Table 3: Model comparison for TRT group

Model                      LOOIC      LOOIC Weights
bandit4arm_4par            4261.820   0.8337282
bandit4arm_lapse           4265.067   0.1644656
bandit4arm_singleA_lapse   4274.090   0.0018062

The bandit4arm_4par model showed the best (lowest LOOIC) results for both groups, so we can stick with it.

If the model with the single learning rate (bandit4arm_singleA_lapse) had shown better performance, that would have been a bummer: we would not be able to answer our research question, and it would indicate that our initial hypothesis was not framed correctly (perhaps there is no competition between positive and negative feedback).

Model Validation

As we did in part 1, we should also validate that the model can recapture the actual behavior. You may have noticed that we used the argument inc_postpred = TRUE in all the model fitting functions. This includes simulated choice data from each of the chains in the output, which is what we need for posterior predictive checks (PPC).

The plots below represent the actual choice data and the average of predicted values from all chains.

Code

# code source: https://ccs-lab.github.io/hBayesDM/articles/getting_started.html#7-posterior-predictive-checks
y_pred_mean <- apply(output_ctr$parVals$y_pred, c(2,3), mean)

numSubjs = dim(output_ctr$allIndPars)[1]  # number of subjects

subjList = unique(output_ctr$rawdata$subjID)  # list of subject IDs
maxT = mean(table(output_ctr$rawdata$subjID))  # maximum number of trials
true_y = array(NA, c(numSubjs, maxT)) # true data (`true_y`)

## true data for each subject
for (i in 1:numSubjs) {
  tmpID = subjList[i]
  tmpData = subset(output_ctr$rawdata, subjID == tmpID)
  true_y[i, ] = tmpData$choice  # only for data with a 'choice' column
}

df <- data.frame()
for (i in 1:numSubjs) {
  choice <- c(true_y[i, ], y_pred_mean[i, ])
  type <- c(rep("True", 200), rep("PPC", 200))
  trial <- c(1:200, 1:200)
  subjID <- rep(paste0("Subject ", subjList[i]), 400)
  temp_df <- data.frame(subjID, trial, choice, type)
  df <- bind_rows(df, temp_df)
}

ggplot(data = df, aes(x = trial, y = choice, color = type)) +
  geom_line() +
  facet_wrap(~subjID, nrow = 5) +
  labs(title = "Model Validation",
       subtitle = "CTR Group")

Code

y_pred_mean <- apply(output_trt$parVals$y_pred, c(2,3), mean)

numSubjs = dim(output_trt$allIndPars)[1]  # number of subjects

subjList = unique(output_trt$rawdata$subjID)  # list of subject IDs
maxT = mean(table(output_trt$rawdata$subjID))  # maximum number of trials
true_y = array(NA, c(numSubjs, maxT)) # true data (`true_y`)

## true data for each subject
for (i in 1:numSubjs) {
  tmpID = subjList[i]
  tmpData = subset(output_trt$rawdata, subjID == tmpID)
  true_y[i, ] = tmpData$choice  # only for data with a 'choice' column
}

df <- data.frame()
for (i in 1:numSubjs) {
  choice <- c(true_y[i, ], y_pred_mean[i, ])
  type <- c(rep("True", 200), rep("PPC", 200))
  trial <- c(1:200, 1:200)
  subjID <- rep(paste0("Subject ", subjList[i]), 400)
  temp_df <- data.frame(subjID, trial, choice, type)
  df <- bind_rows(df, temp_df)
}

ggplot(data = df, aes(x = trial, y = choice, color = type)) +
  geom_line() +
  facet_wrap(~subjID, nrow = 5) +
  labs(title = "Model Validation",
       subtitle = "TRT Group")

Since the PPC values are averages across posterior samples, they are not restricted to the set $\{1, 2, 3, 4\}$. However, we can see that the PPC line follows the trend of the actually selected values fairly accurately.

Groups Comparison

Now we can proceed to the most interesting part for us, the comparison of the parameter values between the two groups. In the Bayesian framework we work with the whole posterior distribution of the parameters, not just the point estimates (PE). Here is how the group-level posterior distributions look:

CTR group:

plot(output_ctr)

TRT group:

plot(output_trt)

To compare the parameters we will look at the difference between the posterior distributions (CTR - TRT). If the proportion of difference values greater than 0 is much higher (or much lower) than $\approx 0.5$, then the difference between the groups is credible. You can think of it this way: if the treatment had no effect, the difference values would on average be centered around 0; if it had an effect, the values would be shifted away from 0. We can also use the highest density interval (HDI) to address the uncertainty of the values.

Learning Rate (Reward)

Code

post_diff <- output_ctr$parVals$mu_Arew - output_trt$parVals$mu_Arew
plotHDI(
  post_diff, 
  Title = "Difference in posterior distributions (CTR - TRT)", 
  xLab = expression(alpha[rew_diff]))

print(paste0(round(mean(post_diff > 0) * 100, 1), 
             "% of difference values are greater than 0."))

95% HDI: Lower bound = 0.1357, Upper bound = 0.3025.

100% of difference values are greater than 0.

Learning Rate (Punishment)

Code

post_diff <- output_ctr$parVals$mu_Apun - output_trt$parVals$mu_Apun
plotHDI(
  post_diff, 
  Title = "Difference in posterior distributions (CTR - TRT)", 
  xLab = expression(alpha[pun_diff]))

print(paste0(round(mean(post_diff > 0) * 100, 1), 
             "% of difference values are greater than 0."))

95% HDI: Lower bound = -0.1, Upper bound = 0.1408.

64.7% of difference values are greater than 0.

Reward Sensitivity

Code

post_diff <- output_ctr$parVals$mu_R - output_trt$parVals$mu_R
plotHDI(
  post_diff, 
  Title = "Difference in posterior distributions (CTR - TRT)", 
  xLab = expression(R[diff]))

print(paste0(round(mean(post_diff > 0) * 100, 1), 
             "% of difference values are greater than 0."))

95% HDI: Lower bound = 1e-04, Upper bound = 1.8635.

97.1% of difference values are greater than 0.

Punishment Sensitivity

Code

post_diff <- output_ctr$parVals$mu_P - output_trt$parVals$mu_P
plotHDI(
  post_diff, 
  Title = "Difference in posterior distributions (CTR - TRT)", 
  xLab = expression(P[diff]))

print(paste0(round(mean(post_diff > 0) * 100, 1), 
             "% of difference values are greater than 0."))

95% HDI: Lower bound = -2.4309, Upper bound = 1.0696.

22.4% of difference values are greater than 0.

As we can see, BA showed a credible group difference in the learning rate for positive feedback and in reward sensitivity. One could argue that the punishment sensitivity values also differ, although the evidence for that is much weaker.

Summary and Comparison

The downside of BA is that it is more computationally demanding. We did not spend much time on coding, mostly thanks to the package, but the model fitting procedure itself can be time-consuming, depending on the available computational power.

Another possible difficulty of BA is the specification of priors. Ideally, we want to have an approximate idea of how the parameter values are distributed (based on the literature, for example) and then check whether the experimental data support this hypothesis.
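
One quick way to see what the chosen priors imply on the interpretable scale is to sample from them and push the samples through the same transformation the model uses. Here is a small sketch (assuming the priors listed above, with $\sigma$ treated as half-normal since it is assumed to be constrained to positive values in the Stan model) of the implied prior on the reward sensitivity $R$:

set.seed(42)
n <- 10000

# draw group-level hyperparameters and subject-level raw values from the priors
mu <- rnorm(n, mean = 0, sd = 1)
sigma <- abs(rnorm(n, mean = 0, sd = 0.2))   # half-normal, assuming sigma must be positive
R_prior <- rnorm(n, mean = 0, sd = 1)

# implied prior distribution of the reward sensitivity parameter
R_implied <- pnorm(mu + sigma * R_prior) * 30

summary(R_implied)
hist(R_implied, breaks = 50, main = "Implied prior on R", xlab = "R")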

Overall, MLE and BA lead to somewhat similar results. In both cases we were able to conclude that the treatment affects humans’ reward sensitivity and the learning rate for a positive outcome (although for the MLE approach the p-value for reward sensitivity was slightly above 0.05).

Table 4: Results comparison between the methods

Parameter                    MLE                                    BA
$\alpha^\text{reward}$       PE = 0.2, 95% CI = [0.1, 0.3],         PE = 0.22, 95% HDI = [0.15, 0.3]
                             p-val < 0.01, Effect size = 1.87
$\alpha^\text{punishment}$   PE = 0.09, 95% CI = [-0.1, 0.12],      PE = 0.02, 95% HDI = [-0.1, 0.14]
                             p-val = 0.87, Effect size = 0.08
$R$                          PE = 0.87, 95% CI = [-0.05, 1.79],     PE = 0.91, 95% HDI = [0.01, 1.86]
                             p-val = 0.06, Effect size = 0.89
$P$                          PE = -0.46, 95% CI = [-1.77, 0.84],    PE = -0.67, 95% HDI = [-2.43, 1.07]
                             p-val = 0.46, Effect size = 0.34

As mentioned at the very beginning of part 1, the purpose of these posts was to give an example of model fitting, not to select the best approach. Both MLE and BA have their pros and cons, and readers are encouraged to decide for themselves which approach suits them better. Readers can also visit the hBayesDM articles for more examples.

References


  1. Seymour, B., Daw, N. D., Roiser, J. P., Dayan, P., & Dolan, R. (2012). Serotonin selectively modulates reward value in human decision-making. The Journal of Neuroscience, 32(17), 5833–5842. https://doi.org/10.1523/JNEUROSCI.0053-12.2012

  2. Ahn, W. Y., Haines, N., & Zhang, L. (2017). Revealing Neurocomputational Mechanisms of Reinforcement Learning and Decision-Making With the hBayesDM Package. Computational Psychiatry, 1, 24. https://doi.org/10.1162/cpsy_a_00002

  3. Stan Development Team (2020). RStan: the R interface to Stan. R package version 2.21.2, http://mc-stan.org/

  4. Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013). Bayesian Data Analysis (3rd ed.). Chapman and Hall/CRC. https://doi.org/10.1201/b16018

  5. Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P. C. (2021). Rank-Normalization, Folding, and Localization: An Improved R-hat for Assessing Convergence of MCMC (with Discussion). Bayesian Analysis, 16(2). https://doi.org/10.1214/20-ba1221

  6. Vehtari, A., Gelman, A., & Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432. https://doi.org/10.1007/s11222-016-9696-4