Intrinsic Characterization And Scoring Of Classes
H-G Zimmerman, Ch Tietz & R Grothmann
This paper deals with classification and scoring by neural networks. Especially in marketing applications, classification tasks are often high dimensional, and only small data sets are available for the estimation of the model parameters. Furthermore, the classes are often unbalanced and cannot be separated easily from each other. Since the class characteristics are often incomplete, the handling of missing data is also a critical issue. To address these problems, we employ an autoassociator neural network which incorporates prior knowledge about the classification task. We propose to transform the classification into a scoring task: the reconstruction error of the autoassociator serves as a membership measure, e.g. for the buyers or non-buyers in a marketing application. Within this framework, a so-called ‘square’ layer with a quadratic squashing function puts us in a better position to model localized structures in the data, while so-called cleaning noise allows us to construct a replacement for missing values in the data set. The usefulness of our neural network architectures is illustrated by a real-world scoring application in which we separate from a database those persons who show the greatest potential to buy a new product.

1 Introduction

Classification tasks arise in diverse data mining applications, such as credit scoring in financial services, customer segmentation in marketing research, or fraud detection in telecommunication services [1, 2]. The spectrum of classification techniques ranges from decision trees, logistic regression, genetic algorithms, and neural networks to various statistical methods. Unfortunately, most of these algorithms suffer from serious problems that come along with real-world classification tasks. These difficulties are:

High dimensional spaces: Typically, real-world classification problems are high dimensional, and only small data sets are available for the estimation of the model parameters.
The high dimensionality of the space and the inherent sparseness of the data points are a serious drawback for density estimation.
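The core idea of the abstract — train an autoassociator on one class and use its reconstruction error as a membership score — can be sketched in a few lines. The following is a minimal illustration only, not the paper's architecture: it uses a plain linear autoassociator trained by gradient descent on toy data, and all names and numbers (sample sizes, learning rate, the `score` helper) are hypothetical assumptions for the sketch. The paper's actual network additionally uses a 'square' layer and cleaning noise, which are omitted here.

```python
import numpy as np

# Toy sketch (assumed setup, not the paper's model): train a linear
# autoassociator on records of one class ("buyers"); a new record's
# reconstruction error then acts as an inverse membership score --
# low error means the record resembles the training class.

rng = np.random.default_rng(0)

# Synthetic "buyer" profiles: 200 samples in 5 dimensions whose
# features are correlated (they live near a 2-dimensional subspace).
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 5))
X = latent @ mix + 0.1 * rng.normal(size=(200, 5))

def train_autoassociator(X, hidden=2, lr=0.01, epochs=500):
    """Gradient descent on the reconstruction loss ||X W1 W2 - X||^2
    of a linear encoder W1 and decoder W2 (a deliberately simple
    stand-in for the paper's autoassociator network)."""
    n, d = X.shape
    W1 = 0.1 * rng.normal(size=(d, hidden))   # encoder weights
    W2 = 0.1 * rng.normal(size=(hidden, d))   # decoder weights
    for _ in range(epochs):
        H = X @ W1                  # hidden representation
        R = H @ W2 - X              # reconstruction residual
        gW2 = H.T @ R / n           # gradient w.r.t. decoder
        gW1 = X.T @ (R @ W2.T) / n  # gradient w.r.t. encoder
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

def score(x, W1, W2):
    """Squared reconstruction error of a single record."""
    return float(np.sum((x @ W1 @ W2 - x) ** 2))

W1, W2 = train_autoassociator(X)

inlier = X[0]                       # a profile from the trained class
outlier = 5.0 * rng.normal(size=5)  # a profile far from the class
assert score(inlier, W1, W2) < score(outlier, W1, W2)
```

Records from the training class are reconstructed well and receive a low score, while records off the class manifold reconstruct poorly; ranking a database by this score yields the scoring described above.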