### Abstract

Generalized linear mixed models (GLMMs) have been applied widely in the analysis of longitudinal data. These models confer two important advantages, namely, the flexibility to include random effects and the ability to make inference about complex covariance structures. In practice, however, inference on variance components can be difficult because of the complexity of the model itself and the dimensionality of the covariance matrix of the random effects. Here we first discuss, for GLMMs, the relation between Bayesian posterior estimates and penalized quasi-likelihood (PQL) estimates, based on a generalization of Harville's result for general linear models. Next, we perform fully Bayesian analyses for the random-effects covariance matrix using three different reference priors: two Jeffreys' priors derived from approximate likelihoods and an approximate uniform shrinkage prior. Computations are carried out by combining asymptotic approximations with Markov chain Monte Carlo (MCMC) methods. Under the criterion of the squared Euclidean norm, we compare the performance of the Bayesian estimates of the variance components with that of the PQL estimates when the responses are non-normal, and with that of the restricted maximum likelihood (REML) estimates when the data are assumed normal. We illustrate the methods with three applications and with simulations of binary, normal, and count responses involving multiple random effects and small sample sizes. The analyses examine the differences in estimation performance when the covariance structure is complex, and demonstrate the equivalence between the PQL estimates and the posterior modes when the former can be derived. The results also show that the Bayesian approach, particularly under the approximate Jeffreys' priors, outperforms the other procedures.
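The comparison criterion mentioned in the abstract — the squared Euclidean norm between an estimated and the true random-effects covariance matrix — can be sketched as follows. This is a minimal illustration only; the matrices below are hypothetical and are not taken from the paper.

```python
# Squared Euclidean norm criterion for comparing variance-component
# estimates: sum the squared element-wise differences between an
# estimated covariance matrix and the true one; smaller is better.

def squared_euclidean_norm(d_hat, d_true):
    """Sum of squared element-wise differences between two matrices."""
    return sum(
        (d_hat[i][j] - d_true[i][j]) ** 2
        for i in range(len(d_true))
        for j in range(len(d_true[0]))
    )

# Hypothetical 2x2 random-effects covariance matrices (illustrative only).
D_true = [[1.0, 0.30], [0.30, 0.50]]
D_bayes = [[1.1, 0.25], [0.25, 0.55]]   # e.g. a posterior-mode estimate
D_pql = [[0.8, 0.15], [0.15, 0.40]]     # e.g. a PQL estimate

loss_bayes = squared_euclidean_norm(D_bayes, D_true)
loss_pql = squared_euclidean_norm(D_pql, D_true)
print(loss_bayes, loss_pql)  # the estimator with the smaller loss wins
```

Under this criterion an estimator is preferred when its (simulation-averaged) loss is smaller, which is how the paper ranks the Bayesian, PQL, and REML procedures.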

| Original language | English |
| --- | --- |
| Pages (from-to) | 587-604 |
| Number of pages | 18 |
| Journal | *Computational Statistics* |
| Volume | 23 |
| Issue number | 4 |
| DOIs | https://doi.org/10.1007/s00180-007-0100-x |
| Publication status | Published - 2008 Oct 1 |


### All Science Journal Classification (ASJC) codes

- Statistics and Probability
- Statistics, Probability and Uncertainty
- Computational Mathematics

### Cite this

Tsai, M.-Y., & Hsiao, C. K. (2008). Computation of reference Bayesian inference for variance components in longitudinal studies. *Computational Statistics*, *23*(4), 587-604. https://doi.org/10.1007/s00180-007-0100-x

Research output: Contribution to journal › Article

```
TY - JOUR
T1 - Computation of reference Bayesian inference for variance components in longitudinal studies
AU - Tsai, Miao-Yu
AU - Hsiao, Chuhsing K.
PY - 2008/10/1
Y1 - 2008/10/1
N2 - Generalized linear mixed models (GLMMs) have been applied widely in the analysis of longitudinal data. This model confers two important advantages, namely, the flexibility to include random effects and the ability to make inference about complex covariances. In practice, however, the inference of variance components can be a difficult task due to the complexity of the model itself and the dimensionality of the covariance matrix of random effects. Here we first discuss for GLMMs the relation between Bayesian posterior estimates and penalized quasi-likelihood (PQL) estimates, based on the generalization of Harville's result for general linear models. Next, we perform fully Bayesian analyses for the random covariance matrix using three different reference priors, two with Jeffreys' priors derived from approximate likelihoods and one with the approximate uniform shrinkage prior. Computations are carried out via the combination of asymptotic approximations and Markov chain Monte Carlo methods. Under the criterion of the squared Euclidean norm, we compare the performances of Bayesian estimates of variance components with that of PQL estimates when the responses are non-normal, and with that of the restricted maximum likelihood (REML) estimates when data are assumed normal. Three applications and simulations of binary, normal, and count responses with multiple random effects and of small sample sizes are illustrated. The analyses examine the differences in estimation performance when the covariance structure is complex, and demonstrate the equivalence between PQL and the posterior modes when the former can be derived. The results also show that the Bayesian approach, particularly under the approximate Jeffreys' priors, outperforms other procedures.
UR - http://www.scopus.com/inward/record.url?scp=53549108901&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=53549108901&partnerID=8YFLogxK
U2 - 10.1007/s00180-007-0100-x
DO - 10.1007/s00180-007-0100-x
M3 - Article
AN - SCOPUS:53549108901
VL - 23
SP - 587
EP - 604
JO - Computational Statistics
JF - Computational Statistics
SN - 0943-4062
IS - 4
ER -
```