Abstract: Most of the current, widely accepted methods of model selection focus
on the ability of models to fit a dataset, e.g. AIC, BIC, Mallows $C_p$,
adjusted $R^2$, and Stochastic Search Variable Selection. These methods also summarize
the fit of a model as a single number. We present a model selection procedure that
instead focuses on a model's ability to make predictions in a user-specified region.
Further, our method summarizes this ability as a distribution of values instead of
as a single number. This approach allows the user to evaluate various models in the
cases of interpolation, extrapolation, or both. The method also allows improvements
in prediction to be balanced against the incremental cost of additional covariates. An
example is presented for modeling system reliability as a function of age and other
usage measures using probit regression models, but the methodology generalizes to many
classes of regression models.
Joint work with Christine M. Anderson-Cook, CCS-6.
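To make the idea concrete, the following is a minimal sketch of prediction-focused model comparison, not the authors' actual procedure: it simulates hypothetical binary reliability data as a function of age and a usage measure, fits two probit models by maximum likelihood (age only vs. age plus usage), and evaluates each model's prediction errors over a user-specified extrapolation region. The region, covariate names, and true coefficients are all illustrative assumptions; because the data are simulated, true probabilities are available for comparison, whereas in practice a measure of predictive uncertainty would take their place. The key point mirrored from the abstract is that each model's performance over the region is a whole distribution of values, not a single summary number.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical data: binary pass/fail outcomes driven by age and a usage
# measure through a probit link (coefficients are made up for illustration).
n = 400
age = rng.uniform(0, 10, n)
usage = rng.uniform(0, 5, n)
true_eta = 2.0 - 0.25 * age - 0.30 * usage
y = (rng.uniform(size=n) < norm.cdf(true_eta)).astype(float)

def neg_loglik(beta, X, y):
    """Negative probit log-likelihood."""
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_probit(X, y):
    """Maximum-likelihood probit fit via BFGS."""
    return minimize(neg_loglik, np.zeros(X.shape[1]),
                    args=(X, y), method="BFGS").x

X_small = np.column_stack([np.ones(n), age])          # age only
X_full = np.column_stack([np.ones(n), age, usage])    # age + usage
b_small = fit_probit(X_small, y)
b_full = fit_probit(X_full, y)

# User-specified prediction region: here, an extrapolation region of
# older, more heavily used systems than most of the observed data.
g_age, g_usage = np.meshgrid(np.linspace(8, 12, 20), np.linspace(4, 6, 20))
ga, gu = g_age.ravel(), g_usage.ravel()
p_true = norm.cdf(2.0 - 0.25 * ga - 0.30 * gu)  # known only because simulated
p_small = norm.cdf(np.column_stack([np.ones(ga.size), ga]) @ b_small)
p_full = norm.cdf(np.column_stack([np.ones(ga.size), ga, gu]) @ b_full)

# Each model yields a distribution of prediction errors over the region,
# which can be compared via quantiles rather than one summary number.
err_small = np.abs(p_small - p_true)
err_full = np.abs(p_full - p_true)
for name, e in [("age only", err_small), ("age + usage", err_full)]:
    print(f"{name}: median {np.median(e):.3f}, "
          f"90th pct {np.quantile(e, 0.9):.3f}")
```

Comparing the full error distributions (or several of their quantiles) across models is what lets the improvement in predictive ability be weighed against the cost of collecting an additional covariate such as usage.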