Quasi-Newton Methods For Training Neural Networks

Price: Free (open access)

Volume: 2

Pages: 13

Published: 1993

Size: 912 kb

Paper DOI: 10.2495/AIENG930242

Copyright: WIT Press

Author(s): B. Robitaille, B. Marcos, M. Veillette & G. Payre

Abstract

The backpropagation algorithm is the most popular procedure for training self-learning feedforward neural networks. However, its rate of convergence is slow, since backpropagation is essentially a steepest descent method. Several researchers have proposed other approaches to improve the rate of convergence: conjugate gradient methods, dynamic modification of learning parameters, full quasi-Newton or Newton methods, stochastic methods, etc. Quasi-Newton methods have been criticized because they require significant computation time and memory to update the Hessian matrix. This paper proposes a modification to the classical quasi-Newton approach that takes the structure of the network into account. With this modification, …
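To make the contrast in the abstract concrete, the sketch below trains a tiny feedforward network with a classical BFGS quasi-Newton update on the flattened weight vector. This illustrates only the generic method the paper starts from, not the authors' structure-exploiting modification (the abstract is truncated before that detail); the network size, toy data, finite-difference gradient, and line-search constants are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch: BFGS training of a 2-2-1 sigmoid network on XOR.
# The inverse-Hessian approximation H is maintained explicitly, which
# is exactly the memory/time cost the abstract says quasi-Newton
# methods were criticized for.

rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    # Flattened parameter vector -> (W1, b1, W2, b2).
    return w[:4].reshape(2, 2), w[4:6], w[6:8].reshape(1, 2), w[8:9]

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1.T + b1)       # hidden layer, shape (4, 2)
    out = sigmoid(h @ W2.T + b2)     # output layer, shape (4, 1)
    return 0.5 * np.mean((out - y) ** 2)

def grad(w, eps=1e-6):
    # Central finite differences keep the sketch short; in practice
    # backpropagation would supply this gradient.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

w = rng.normal(scale=0.5, size=9)
H = np.eye(9)                        # inverse-Hessian approximation
g = grad(w)

for _ in range(200):
    p = -H @ g                       # quasi-Newton search direction
    # Backtracking line search with a sufficient-decrease test.
    t, f0 = 1.0, loss(w)
    while loss(w + t * p) > f0 + 1e-4 * t * (g @ p):
        t *= 0.5
        if t < 1e-10:
            break
    s = t * p
    w_new = w + s
    g_new = grad(w_new)
    yk = g_new - g
    sy = s @ yk
    if sy > 1e-12:                   # curvature condition keeps H positive definite
        rho = 1.0 / sy
        I = np.eye(9)
        H = (I - rho * np.outer(s, yk)) @ H @ (I - rho * np.outer(yk, s)) \
            + rho * np.outer(s, s)
    w, g = w_new, g_new

print("final loss:", loss(w))
```

For a network with n weights, H is an n-by-n dense matrix and each update costs O(n^2) time and memory, which is why exploiting the layered structure of the network, as the paper proposes, is attractive.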
