Abstract
Single layer perceptrons (SLPs) often exhibit reasonable generalization performance on many problems of interest. However, due to the well-known limitations of SLPs, very little effort has been made to improve their performance. This paper proposes a method for improving the performance of SLPs called `wagging' (weight averaging). The method involves training several different SLPs on the same training data and then averaging their weights to obtain a single SLP. The performance of the wagged SLP is compared with that of more complex learning algorithms (backpropagation, C4.5, IB1, MML, etc.) on 15 data sets from real-world problem domains. Surprisingly, the wagged SLP has better average generalization performance than any of the other learning algorithms on the problems tested. This result is explained and analyzed. The analysis includes examining the performance characteristics of the standard delta rule training algorithm for SLPs and the correlation between training and test set scores as training progresses.
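Since only the abstract is available here, the following is a minimal sketch of the wagging idea it describes: several SLPs are trained on the same data from different random initializations, and their weights are averaged into a single SLP. The delta-rule details, function names, and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
# Sketch of 'wagging' (weight averaging) for single layer perceptrons.
# Assumes a binary classification task and plain delta-rule training;
# all names and hyperparameters are illustrative, not from the paper.
import numpy as np

def train_slp(X, y, epochs=100, lr=0.1, rng=None):
    """Train one SLP (sigmoid output) with the delta rule; return its weights."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.normal(scale=0.1, size=X.shape[1] + 1)    # random init, incl. bias
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])     # append bias column
    for _ in range(epochs):
        out = 1.0 / (1.0 + np.exp(-Xb @ w))           # sigmoid activation
        w += lr * Xb.T @ (y - out) / len(y)           # delta-rule update
    return w

def wag(X, y, n_nets=10, seed=0):
    """Train n_nets SLPs on the same data and average their weights."""
    rng = np.random.default_rng(seed)
    weights = [train_slp(X, y, rng=rng) for _ in range(n_nets)]
    return np.mean(weights, axis=0)                   # single 'wagged' SLP

def predict(w, X):
    """Classify with the (possibly wagged) SLP weights."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)
```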
Original language | English |
---|---|
Pages | 1608-1613 |
Number of pages | 6 |
State | Published - 1999 |
Event | International Joint Conference on Neural Networks (IJCNN'99) - Washington, DC, USA; 10 Jul 1999 → 16 Jul 1999 |
Conference
Conference | International Joint Conference on Neural Networks (IJCNN'99) |
---|---|
City | Washington, DC, USA |
Period | 10/07/99 → 16/07/99 |