Cross validation and MLP architecture selection

Tim Andersen, Tony Martinez

Research output: Contribution to conference › Paper › peer-review

46 Scopus citations

Abstract

The performance of cross validation (CV) based MLP architecture selection is examined using 14 real-world problem domains. When testing many different network architectures, the results show that CV is only slightly more likely than random to select the optimal network architecture, and that the strategy of using the simplest available network architecture performs better than CV in this case. Experimental evidence suggests several reasons for the poor performance of CV. In addition, three general strategies which lead to a significant increase in the performance of CV are proposed. While this paper focuses on using CV to select the optimal MLP architecture, the strategies are also applicable when CV is used to select between several different learning models, whether the models are neural networks, decision trees, or other types of learning algorithms. When using these strategies, the average generalization performance of the network architecture which CV selects is significantly better than the performance of several other well-known machine learning algorithms on the data sets tested.
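The CV-based architecture selection procedure the abstract examines can be sketched as follows. This is a minimal illustration using scikit-learn's `MLPClassifier`, not the paper's implementation; the candidate hidden-layer sizes and the synthetic dataset are assumptions chosen for demonstration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for one of the problem domains (illustrative only).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Candidate MLP architectures, ordered from simplest to most complex.
candidates = [(2,), (4,), (8,), (16,)]

# Score each architecture by 5-fold cross-validation accuracy.
scores = {}
for hidden in candidates:
    mlp = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
    scores[hidden] = cross_val_score(mlp, X, y, cv=5).mean()

# CV-based selection: pick the architecture with the best CV score.
cv_choice = max(scores, key=scores.get)

# The baseline strategy the paper compares against: always pick the
# simplest available architecture.
simplest_choice = candidates[0]
```

Comparing `cv_choice` against `simplest_choice` on held-out data mirrors the paper's experimental comparison between CV-based selection and the simplest-architecture heuristic.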

Original language: English
Pages: 1614-1619
Number of pages: 6
State: Published - 1999
Event: International Joint Conference on Neural Networks (IJCNN'99) - Washington, DC, USA
Duration: 10 Jul 1999 - 16 Jul 1999

Conference

Conference: International Joint Conference on Neural Networks (IJCNN'99)
City: Washington, DC, USA
Period: 10/07/99 - 16/07/99
