Speaker: Alan Lenarcic, Harvard University
Title: Two Lassos Instead of One?
Date: Thursday, November 12, 2009
Time: 10 a.m.
Location: TA 3, Otowi Side Rooms A&B
Abstract: Much effort in penalty-based methods has gone into designing
penalties with optimal shrinkage, model-selection, and smoothness properties
to compete with the Lasso (Least Absolute Shrinkage and Selection Operator).
The success of a modified penalty depends on the regime in which the true
model lies. The Bayesian view is that the behavior and use of penalty-based
methods reflect the user's subjective prior information, and a Bayesian
would prefer to design a prior that explicitly and exactly matches those
assumptions. While a typical Bayesian model-selection algorithm must explore
the whole posterior space, penalty methods choose only a single point
estimate, and so are of considerable use in analyses under time constraints
and with large datasets. How might Bayesians take advantage of advances in
penalty regression? Our answer is a mixture of two Lasso priors: one
approximating a sampling density for the active factors, the other
approximating the concentration of inactive factors, whose coefficients
should be zero. The EM algorithm can distinguish factor membership,
yielding weighted Lasso algorithms. The result is a flexible framework and
theory applicable to any implemented Lasso scheme; our focus is model
selection in linear regression and covariance selection for Gaussian
graphical models in multivariate data.
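
For concreteness, here is a minimal sketch of the two-Lasso construction in
our own notation; the talk's exact formulation may differ. Each coefficient
receives a mixture of two Laplace (Lasso) priors, with lambda_0 >> lambda_1
so that the second component concentrates inactive coefficients near zero:

    % Mixture prior on each coefficient (notation assumed, not the speaker's)
    p(\beta_j) = \pi \tfrac{\lambda_1}{2} e^{-\lambda_1 |\beta_j|}
               + (1 - \pi) \tfrac{\lambda_0}{2} e^{-\lambda_0 |\beta_j|}

    % E-step: posterior probability that factor j is active
    w_j = \frac{\pi \lambda_1 e^{-\lambda_1 |\beta_j|}}
               {\pi \lambda_1 e^{-\lambda_1 |\beta_j|}
                + (1 - \pi) \lambda_0 e^{-\lambda_0 |\beta_j|}}

    % M-step: a weighted Lasso with per-coefficient penalties
    \hat{\beta} = \arg\min_{\beta} \tfrac{1}{2} \| y - X\beta \|_2^2
                + \sum_j \bigl[ w_j \lambda_1 + (1 - w_j) \lambda_0 \bigr] |\beta_j|

Iterating these two steps is one way the EM algorithm can "distinguish
factor membership": each w_j acts as a soft membership weight, and the
resulting per-coefficient penalty can be handed to any existing weighted
Lasso solver.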