Little neuron that could

Tim Andersen, Tony Martinez

Research output: Contribution to conference › Paper › peer-review

2 Scopus citations

Abstract

Single-layer perceptrons (SLPs) often exhibit reasonable generalization performance on many problems of interest. However, due to the well-known limitations of SLPs, very little effort has been made to improve their performance. This paper proposes a method for improving the performance of SLPs called 'wagging' (weight averaging). This method involves training several different SLPs on the same training data and then averaging their weights to obtain a single SLP. The performance of the wagged SLP is compared with that of other, more complex learning algorithms (backpropagation, C4.5, IB1, MML, etc.) on 15 data sets from real-world problem domains. Surprisingly, the wagged SLP has better average generalization performance than any of the other learning algorithms on the problems tested. This result is explained and analyzed. The analysis includes looking at the performance characteristics of the standard delta rule training algorithm for SLPs and the correlation between training and test set scores as training progresses.
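To make the wagging procedure described in the abstract concrete, below is a minimal sketch in Python/NumPy: it trains several SLPs with stochastic delta-rule updates on the same data and then averages their weights to obtain a single wagged SLP. The function names (train_slp, wag), the sigmoid output unit, and the learning-rate/epoch settings are illustrative assumptions, not details taken from the paper.

    # Minimal "wagging" (weight averaging) sketch for single-layer perceptrons.
    # Assumes a binary classification task with 0/1 labels; names are illustrative.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_slp(X, y, epochs=100, lr=0.1, rng=None):
        """Train one SLP with per-pattern delta-rule updates."""
        rng = np.random.default_rng() if rng is None else rng
        w = rng.normal(scale=0.1, size=X.shape[1])  # random initial weights
        b = 0.0
        for _ in range(epochs):
            for i in rng.permutation(len(X)):       # stochastic presentation order
                err = y[i] - sigmoid(X[i] @ w + b)
                w += lr * err * X[i]                # delta-rule weight update
                b += lr * err
        return w, b

    def wag(X, y, n_nets=10, **kwargs):
        """Train several SLPs on the same data and average their weights."""
        rng = np.random.default_rng(0)
        ws, bs = zip(*(train_slp(X, y, rng=rng, **kwargs) for _ in range(n_nets)))
        return np.mean(ws, axis=0), np.mean(bs)

    # Usage: classify with the single averaged ("wagged") SLP.
    # w_avg, b_avg = wag(X_train, y_train)
    # preds = (sigmoid(X_test @ w_avg + b_avg) > 0.5).astype(int)

Because all the averaged networks share the same single-layer architecture, their weight vectors are directly comparable, which is what makes simple averaging meaningful here; the diversity between them comes only from random initialization and presentation order.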

Original language: English
Pages: 1608-1613
Number of pages: 6
State: Published - 1999
Event: International Joint Conference on Neural Networks (IJCNN'99) - Washington, DC, USA
Duration: 10 Jul 1999 – 16 Jul 1999

Conference

Conference: International Joint Conference on Neural Networks (IJCNN'99)
City: Washington, DC, USA
Period: 10/07/99 – 16/07/99

