Provably convergent dynamic training method for multi-layer perceptron networks

Tim L. Andersen, Tony R. Martinez

Research output: Contribution to conference › Paper › peer-review


Abstract

This paper presents a new method for training multi-layer perceptron networks called DMP1 (Dynamic Multilayer Perceptron 1). The method is based upon a divide-and-conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. The individual nodes of the network are trained using a genetic algorithm. The method is capable of handling real-valued inputs, and a proof of the convergence properties of the basic model is given. Simulation results show that DMP1 performs favorably in comparison with other learning algorithms.
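The abstract mentions training individual network nodes with a genetic algorithm on real-valued inputs. As a rough illustration of that idea only (not the authors' DMP1 procedure, whose details are not given here), the sketch below evolves the weights of a single threshold node with a minimal generational GA using truncation selection and Gaussian mutation; all function names and parameter values are illustrative assumptions:

```python
import random

def node_output(weights, x):
    # Threshold perceptron node on real-valued inputs; weights[-1] is the bias.
    s = sum(w * xi for w, xi in zip(weights, x)) + weights[-1]
    return 1 if s > 0 else 0

def fitness(weights, data):
    # Fraction of labeled examples the node classifies correctly.
    return sum(node_output(weights, x) == y for x, y in data) / len(data)

def evolve_node(data, n_inputs, pop_size=30, generations=100, seed=0):
    # Simple generational GA (illustrative, not DMP1's actual GA):
    # keep the fitter half, refill by mutating random survivors.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_inputs + 1)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [
            [w + rng.gauss(0, 0.2) for w in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=lambda w: fitness(w, data))

# Toy linearly separable data: label 1 iff x0 + x1 > 1.
data = [((a / 4, b / 4), 1 if a / 4 + b / 4 > 1 else 0)
        for a in range(5) for b in range(5)]
best = evolve_node(data, n_inputs=2)
print(fitness(best, data))
```

In a tree-structured scheme like the one the abstract describes, such a trained node would become one internal vertex of the binary tree, with further nodes allocated dynamically where the current network still misclassifies examples.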

Original language: English
Pages: 77-84
Number of pages: 8
State: Published - 1995
Event: Proceedings of the 1995 RNNS/IEEE 2nd International Symposium on Neuroinformatics and Neurocomputers - Rostov-on-Don, Russia
Duration: 20 Sep 1995 - 23 Sep 1995


