Computational experiment to compare techniques in large datasets to measure credit banking risk in home equity loans

Price

Free (open access)

Volume

Volume 5 (2017), Issue 5

Pages

8

Page Range

771 - 779

Paper DOI

10.2495/CMEM-V5-N5-771-779

Copyright

WIT Press

Author(s)

A. PÉREZ-MARTÍN & M. VACA

Abstract

In the 1960s, coinciding with the massive demand for credit cards, financial companies needed a method to know their exposure to insolvency risk, and they began applying credit-scoring techniques. In the 1980s, credit-scoring techniques were extended to loans owing to the increased demand for credit and to computational progress. In 2004, new recommendations of the Basel Committee on Banking Supervision (known as Basel II) appeared. With the ensuing global financial crisis, a new document, Basel III, followed, introducing more demanding requirements on the control of borrowed capital.

Nowadays, one of the main problems still to be addressed is the presence of large datasets. This research focuses on calculating probabilities of default in home equity loans and on measuring the computational efficiency of several statistical and data mining methods. To this end, Monte Carlo experiments with well-known techniques and algorithms have been developed.

These computational experiments reveal that large datasets require BigData techniques and algorithms that yield faster, unbiased estimators.
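As a rough illustration of the kind of experiment the abstract describes, the sketch below times two of the techniques named in the keywords, discriminant analysis and a support vector machine, over Monte Carlo replications on synthetic loan-like data. This is not the authors' code: the dataset size, feature count, and class imbalance are hypothetical, and scikit-learn is assumed as the implementation.

```python
import time

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import LinearSVC

results = {"LDA": [], "SVM": []}

for rep in range(10):  # Monte Carlo replications
    # Synthetic, imbalanced default/non-default data; 100,000 rows is a
    # stand-in for "large" -- the paper's actual dataset sizes are not given here.
    X, y = make_classification(n_samples=100_000, n_features=20,
                               weights=[0.9, 0.1], random_state=rep)
    for name, model in (("LDA", LinearDiscriminantAnalysis()),
                        ("SVM", LinearSVC(dual=False))):
        start = time.perf_counter()
        model.fit(X, y)  # time only the model fit
        results[name].append(time.perf_counter() - start)

for name, times in results.items():
    print(f"{name}: mean fit time {np.mean(times):.3f}s over {len(times)} runs")
```

Averaging fit times over repeated randomized replications, rather than timing a single run, is what makes this a Monte Carlo comparison of computational efficiency.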

Keywords

BigData; Credit Scoring; Monte Carlo; Discriminant analysis; Support Vector Machine.