Abstract
The major limitations of conventional learning algorithms are local minima and slow convergence. This paper presents a novel heuristic global learning algorithm for neural networks. The proposed algorithm is based on the least-squares (LS) method to maintain fast convergence and a penalty (PEN) approach to address the problem of local minima. The penalty term is superimposed on the error surface and is likely to provide a way of escape from a local minimum when convergence stalls. The choice and adjustment of the penalty factor are also derived to demonstrate the effect of the penalty term and to ensure the convergence of the algorithm. The developed learning algorithm is applied to several classification problems. In all the tested problems, the proposed algorithm outperforms other conventional algorithms in terms of convergence speed and the ability to escape from local minima.
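To make the idea concrete, the sketch below trains a small network on a sum-of-squares (least-squares) error and, when progress stalls, superimposes a repulsive penalty term centred at the stalled weights so that gradient descent is pushed away from the suspected local minimum. This is only an illustrative sketch of the general penalized-optimization idea described in the abstract, not the paper's exact algorithm; the penalty form, its factor `lam`, the stall tolerance `tol`, and the XOR task are all assumptions chosen for a self-contained example.

```python
# Illustrative sketch (not the paper's exact method): least-squares training
# of a tiny MLP on XOR, with a repulsive penalty superimposed on the error
# surface whenever convergence stalls, to help escape a local minimum.
import numpy as np

rng = np.random.default_rng(0)

# XOR data: a classic task known for poor local minima.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 2 units.
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def params():
    # Pack all weights and biases into one vector (for the penalty term).
    return np.concatenate([W1.ravel(), b1, W2.ravel(), b2])

lr, lam, tol = 0.5, 0.0, 1e-6      # learning rate, penalty factor, stall tolerance
prev_loss, w_stall = np.inf, None

for epoch in range(20000):
    H = sigmoid(X @ W1 + b1)        # hidden activations
    Y = sigmoid(H @ W2 + b2)        # network output
    err = Y - T
    loss = 0.5 * np.sum(err ** 2)   # sum-of-squares (least-squares) error

    # Backpropagation of the least-squares term.
    dY = err * Y * (1 - Y)
    gW2 = H.T @ dY;  gb2 = dY.sum(0)
    dH = (dY @ W2.T) * H * (1 - H)
    gW1 = X.T @ dH;  gb1 = dH.sum(0)

    # If progress has stalled while the error is still large, switch on a
    # repulsive penalty lam * exp(-||w - w_stall||^2 / 2) centred at the
    # stalled weights (a hypothetical stand-in for the paper's penalty term).
    if lam == 0.0 and loss > 0.05 and abs(prev_loss - loss) < tol:
        lam, w_stall = 0.1, params()
    if lam > 0.0:
        d = params() - w_stall
        pen_grad = -lam * d * np.exp(-0.5 * (d @ d))   # gradient of the penalty
        gW1 += pen_grad[:4].reshape(2, 2); gb1 += pen_grad[4:6]
        gW2 += pen_grad[6:8].reshape(2, 1); gb2 += pen_grad[8:]
        if d @ d > 1.0:             # far enough from the stall point:
            lam = 0.0               # drop the penalty, resume plain LS descent

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    prev_loss = loss

print("final sum-of-squares error:", round(float(loss), 4))
```

Descending the augmented surface (error plus penalty) drives the weights away from the point where training stalled; once they are far enough away, the penalty is dropped and ordinary least-squares descent resumes, which mirrors the abstract's idea of the penalty providing a way of escape without permanently distorting the error surface.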
| Original language | English |
|---|---|
| Pages (from-to) | 115-131 |
| Number of pages | 17 |
| Journal | Neurocomputing |
| Volume | 25 |
| Issue number | 1-3 |
| DOIs | |
| Publication status | Published - Apr 1999 |
| Externally published | Yes |
Keywords
- Global learning algorithm
- Least-squares method
- Multilayer neural networks
- Penalized optimization
ASJC Scopus subject areas
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence