Proving the efficacy of complementary inputs for multilayer neural networks

Timothy L. Andersen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper proposes and discusses a backpropagation-based training approach for multilayer networks that counteracts the tendency of typical backpropagation-based training algorithms to favor examples with large input feature values. This problem can occur in any real-valued input space and can create a surprising degree of skew in the learned decision surface, even with relatively simple training sets. The proposed method modifies the original input feature vectors in the training set by appending complementary inputs, which essentially doubles the number of inputs to the network. This paper proves that the modification does not increase network complexity by showing that the network with complementary inputs can be mapped back into the original feature space.
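As a concrete illustration of the idea in the abstract, the sketch below assumes the common complement-coding convention of appending 1 - x for features scaled to [0, 1]; the paper's exact construction may differ. The numeric check mirrors the back-mapping argument for a single linear unit: weights on the complementary inputs fold into the original weights and the bias. The function name complement_code and all values are illustrative, not taken from the paper.

import numpy as np

def complement_code(x):
    """Append the complements 1 - x to a feature vector x scaled to [0, 1].

    This doubles the input dimensionality, as described in the abstract.
    (Assumes the common complement-coding convention; the paper's exact
    construction may differ.)
    """
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, 1.0 - x])

# Back-mapping sketch for a single linear unit: a unit with weights (w, v)
# on the complemented inputs [x, 1 - x] and bias b computes
#   w.x + v.(1 - x) + b = (w - v).x + (b + sum(v)),
# i.e. it is equivalent to a unit over the original inputs with weights
# w - v and bias b + sum(v).
rng = np.random.default_rng(0)
x = rng.random(4)                      # original features in [0, 1]
w = rng.standard_normal(4)             # weights on x
v = rng.standard_normal(4)             # weights on 1 - x
b = 0.5

augmented = w @ x + v @ (1.0 - x) + b  # unit on complemented inputs
folded = (w - v) @ x + (b + v.sum())   # equivalent unit on x alone
assert np.isclose(augmented, folded)

Because the fold only re-parameterizes the weights and bias, the complemented network computes nothing a network over the original inputs could not, which matches the abstract's claim that complexity does not increase.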

Original language: English
Title of host publication: 2011 International Joint Conference on Neural Networks, IJCNN 2011 - Final Program
Pages: 2062-2066
Number of pages: 5
DOIs
State: Published - 2011
Event: 2011 International Joint Conference on Neural Networks, IJCNN 2011 - San Jose, CA, United States
Duration: 31 Jul 2011 - 5 Aug 2011

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Conference

Conference: 2011 International Joint Conference on Neural Networks, IJCNN 2011
Country/Territory: United States
City: San Jose, CA
Period: 31/07/11 - 5/08/11
