

Applications of Penalized Binary Choice Estimators with Improved Predictive Fit

Price

Free (open access)

Volume

43

Pages

10

Published

2006

Size

424 KB

Paper DOI

10.2495/CF060201

Copyright

WIT Press

Author(s)

D. J. Miller & W.-H. Liu

Abstract

This paper presents applications of penalized ML estimators for binary choice problems. The penalty is based on an information-theoretic measure of predictive fit for binary choice outcomes, and the resulting penalized ML estimators are asymptotically equivalent to the associated ML estimators but may have better in-sample and out-of-sample predictive fit in finite samples. The proposed methods are demonstrated with a set of Monte Carlo experiments and two examples from the applied finance literature.

Keywords

binary choice, information theory, penalized ML, prediction.

1 Introduction

The sampling properties of the maximum likelihood (ML) estimators for binary choice problems are well established. Much of the existing research has focused on the properties of estimators for the response coefficients, which are important for model selection and for estimating the marginal effects of the explanatory variables. However, the use of fitted models to predict the choices made by agents outside the current sample, although very important in practice, has attracted less attention from researchers. In some cases, the ML estimators may exhibit poor in-sample and out-of-sample predictive performance, especially when the sample size is small. Although several useful predictive goodness-of-fit measures have been proposed, there are no standard remedies for poor in-sample or out-of-sample predictive fit. As noted by Train [1], there is a conceptual problem with measuring the in-sample predictive fit: the predicted choice probabilities are defined with respect to the relative frequency of choices in repeated samples and do not indicate the actual choices made by the sampled agents.
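
To make the setup concrete, below is a minimal sketch of one way a penalized ML binary-choice estimator of this general kind can be implemented, assuming a logit likelihood and a hypothetical entropy-based predictive-fit penalty with weight lam; the paper's actual information-theoretic penalty is not specified on this page and may take a different form. Because the penalty term stays bounded while the log-likelihood grows with the sample size, any fixed lam leaves the estimator asymptotically equivalent to ordinary ML, and lam = 0 recovers ML exactly.

    # Minimal sketch of a penalized ML logit estimator (assumptions noted above);
    # the entropy penalty here is hypothetical, not the paper's exact penalty.
    import numpy as np
    from scipy.optimize import minimize

    def neg_penalized_loglik(beta, X, y, lam):
        """Negative logit log-likelihood plus a penalty that rewards sharp
        (low-entropy) predicted choice probabilities."""
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # predicted P(y = 1 | x)
        eps = 1e-12                             # guard against log(0)
        loglik = np.sum(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
        # Hypothetical penalty: average Shannon entropy of the fitted
        # probabilities; it is O(1) while the log-likelihood is O(n),
        # so the penalty vanishes asymptotically relative to the likelihood.
        entropy = -np.mean(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))
        return -loglik + lam * entropy

    def fit_penalized_logit(X, y, lam=1.0):
        """Maximize the penalized likelihood; lam = 0 gives ordinary ML."""
        beta0 = np.zeros(X.shape[1])
        res = minimize(neg_penalized_loglik, beta0, args=(X, y, lam), method="BFGS")
        return res.x

    # Usage on simulated logit data (intercept plus two covariates)
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
    true_beta = np.array([0.5, 1.0, -1.0])
    y = (X @ true_beta + rng.logistic(size=200) > 0).astype(float)

    beta_ml = fit_penalized_logit(X, y, lam=0.0)   # ordinary ML estimate
    beta_pen = fit_penalized_logit(X, y, lam=2.0)  # penalized ML estimate

In this sketch, penalizing the average entropy of the fitted probabilities pushes them toward 0 or 1, which sharpens the predicted choices in small samples; the choice of lam governs the trade-off between likelihood fit and predictive sharpness and is an assumption here, not a value taken from the paper.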