Publications (173)

D. Schmidt, E. Makalic

Global-local shrinkage hierarchies are an important innovation in Bayesian estimation. We propose the use of log-scale distributions as a novel basis for generating families of prior distributions for local shrinkage hyperparameters. By varying the scale parameter one may vary the degree to which the prior distribution promotes sparsity in the coefficient estimates. By examining the class of distributions over the logarithm of the local shrinkage parameter that have log-linear, or sub-log-linear, tails, we show that many standard prior distributions for local shrinkage parameters can be unified in terms of the tail behaviour and concentration properties of their corresponding marginal distributions over the coefficients $\beta_j$. We derive upper bounds on the rate of concentration around $|\beta_j|=0$, and the tail decay as $|\beta_j| \to \infty$, achievable by this wide class of prior distributions. We then propose a new type of ultra-heavy-tailed prior, called the log-$t$ prior, with the property that, irrespective of the choice of associated scale parameter, the marginal distribution always diverges at $\beta_j = 0$, and always possesses super-Cauchy tails. We develop results demonstrating when prior distributions with (sub-)log-linear tails attain Kullback--Leibler super-efficiency and prove that the log-$t$ prior distribution is always super-efficient. We show that the log-$t$ prior is less sensitive to misspecification of the global shrinkage parameter than the horseshoe or lasso priors. By incorporating the scale parameter of the log-scale prior distributions into the Bayesian hierarchy we derive novel adaptive shrinkage procedures. Simulations show that the adaptive log-$t$ procedure appears to always perform well, irrespective of the level of sparsity or signal-to-noise ratio of the underlying model.

D. Schmidt, E. Makalic

Global-local shrinkage hierarchies are an important, recent innovation in Bayesian estimation of regression models. In this paper we propose to use log-scale distributions as a basis for generating families of flexible prior distributions for the local shrinkage hyperparameters within such hierarchies. An important property of the log-scale priors is that by varying the scale parameter one may vary the degree to which the prior distribution promotes sparsity in the coefficient estimates, all the way from the simple proportional shrinkage of the ridge regression model up to extremely heavy-tailed, sparsity-inducing prior distributions. By examining the class of distributions over the logarithm of the local shrinkage parameter that have log-linear, or sub-log-linear, tails, we show that many of the standard prior distributions for local shrinkage parameters can be unified in terms of the tail behaviour and concentration properties of their corresponding marginal distributions over the coefficients $\beta_j$. We use these results to derive upper bounds on the rate of concentration around $|\beta_j|=0$, and the tail decay as $|\beta_j| \to \infty$, achievable by this class of prior distributions. We then propose a new type of ultra-heavy-tailed prior, called the log-$t$ prior, which exhibits the property that, irrespective of the choice of associated scale parameter, the induced marginal distribution over $\beta_j$ always diverges at $\beta_j = 0$, and always possesses super-Cauchy tails. Finally, we propose to incorporate the scale parameter of the log-scale prior distributions into the Bayesian hierarchy and derive an adaptive shrinkage procedure. Simulations show that, in contrast to a number of standard prior distributions, our adaptive log-$t$ procedure appears to always perform well, irrespective of the level of sparsity or signal-to-noise ratio of the underlying model.
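The "super-Cauchy tails" claim can be illustrated by Monte Carlo, comparing the marginal over $\beta_j$ induced by a horseshoe-style half-Cauchy local scale against one induced by a log-$t$ scale. The choices $\nu = 2$ and $s = 1$ below are illustrative assumptions, not the paper's recommended values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Horseshoe-style local scale: lambda_j ~ half-Cauchy(0, 1).
lam_hs = np.abs(rng.standard_cauchy(n))

# Log-t local scale (one reading of the abstract):
# log(lambda_j) ~ t_nu(0, s)  =>  lambda_j = exp(s * T), T ~ t_nu.
nu, s = 2.0, 1.0
lam_lt = np.exp(s * rng.standard_t(nu, size=n))

# Induced marginals over the coefficients: beta_j | lambda_j ~ N(0, lambda_j^2).
beta_hs = lam_hs * rng.standard_normal(n)
beta_lt = lam_lt * rng.standard_normal(n)

# The log-t marginal's extreme quantiles are orders of magnitude larger,
# reflecting its heavier-than-Cauchy tails:
print(np.quantile(np.abs(beta_hs), 0.999), np.quantile(np.abs(beta_lt), 0.999))
```

Under this sketch the upper quantiles of the log-$t$ draws dwarf the horseshoe's, which is the behaviour the abstract attributes to the super-Cauchy tails.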

E. Roekel, P. Dugué, D. English, J. Joo, E. Wong, E. Makalic, M. Southey, G. Giles et al.

R. Milne, K. Kuchenbaecker, K. Michailidou, J. Beesley, S. Kar, S. Lindström, Shirley Hui, Audrey Lemaçon et al.

K. Michailidou, S. Lindström, J. Dennis, J. Beesley, Shirley Hui, S. Kar, Audrey Lemaçon, P. Soucy et al.

Zemei Xu, D. Schmidt, E. Makalic, G. Qian, J. Hopper

Most estimates for penalised linear regression can be viewed as posterior modes for an appropriate choice of prior distribution. Bayesian shrinkage methods, particularly the horseshoe estimator, have recently attracted a great deal of attention in the problem of estimating sparse, high-dimensional linear models. This paper extends these ideas, and presents a Bayesian grouped model with continuous global-local shrinkage priors to handle complex group hierarchies that include overlapping and multilevel group structures. As the posterior mean is never a sparse estimate of the linear model coefficients, we extend the recently proposed decoupled shrinkage and selection (DSS) technique to the problem of selecting groups of variables from posterior samples. To choose a final, sparse model, we also adapt generalised information criteria approaches to the DSS framework. To ensure that sparse groups, in which only a few predictors are active, can be effectively identified, we provide an alternative degrees of freedom estimator for sparse Bayesian linear models that takes into account the effects of shrinkage on the model coefficients. Simulations and real data analysis using our proposed method show promising performance in terms of correct identification of active and inactive groups, and prediction, in comparison with a Bayesian grouped slab-and-spike approach.
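As a rough illustration of the kind of hierarchy the abstract describes, a single prior draw with group-level local shrinkage might look like the following. The half-Cauchy scales at each level are an assumption in the horseshoe style, not necessarily the authors' exact specification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Grouped global-local hierarchy (illustrative):
#   beta_j | lambda_j, delta_{g(j)}, tau ~ N(0, tau^2 * delta_{g(j)}^2 * lambda_j^2)
groups = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])  # g(j): group label per coefficient
tau = 0.1                                       # global shrinkage level
delta = np.abs(rng.standard_cauchy(3))          # one local scale per group
lam = np.abs(rng.standard_cauchy(groups.size))  # one local scale per coefficient

beta = tau * delta[groups] * lam * rng.standard_normal(groups.size)
print(beta.round(3))
```

Because $\delta_g$ multiplies every coefficient in group $g$, a small group-level draw shrinks the whole group towards zero jointly, which is how such a hierarchy encodes group sparsity.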

Y. Feng, Kelly Cho, S. Lindstrom, P. Kraft, Jean B. Cormack, Kendra L. Blalock, P. Campbell et al.

L. FitzGerald, Haroon Naeem, E. Makalic, D. Schmidt, J. Dowty, J. Joo, Chol-hee Jung, J. Bassett et al.

Global measures of peripheral blood DNA methylation have been associated with risk of some malignancies, including breast, bladder, and gastric cancer. Here, we examined genome‐wide measures of peripheral blood DNA methylation in prostate cancer and its non‐aggressive and aggressive disease forms.
