Lazy training: Improving backpropagation learning through network interaction

M. E. Rimer, T. L. Andersen, T. R. Martinez

Research output: Contribution to conference › Paper › peer-review


Abstract

Backpropagation, like most high-order learning algorithms, is prone to overfitting. We address this issue by introducing interactive training (IT), a logical extension to backpropagation training that employs interaction among multiple networks. This method is based on the theory that centralized control is more effective for learning in deep problem spaces in a multi-agent paradigm [25]. IT methods allow networks to work together to form more complex systems while not restraining their individual ability to specialize. Lazy training, an implementation of IT that minimizes misclassification error, is presented. Lazy training discourages overfitting and is conducive to higher accuracy in multiclass problems than standard backpropagation. Experiments on a large, real-world OCR data set have shown interactive training to significantly increase generalization accuracy, from 97.86% to 99.11%. These results are supported by theoretical and conceptual extensions from algorithmic to interactive training models.
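The core idea the abstract attributes to lazy training — driving updates by misclassification error rather than a fixed squared-error target — can be illustrated with a toy sketch. This is a hypothetical simplification, not the authors' method: it uses a single linear classifier on synthetic two-class data, and the "lazy" element is that weights are adjusted only for patterns the model currently misclassifies, leaving correctly classified patterns untouched.

```python
# Illustrative sketch (assumption, not the paper's algorithm): update
# weights only on misclassified patterns, so correct patterns exert no
# pull on the model -- a loose analogue of minimizing misclassification
# error instead of squared error on every output.
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: two Gaussian blobs, classes 0 and 1.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),
               rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

W = np.zeros((2, 2))  # (features, classes)
b = np.zeros(2)

def predict(X):
    return np.argmax(X @ W + b, axis=1)

lr = 0.1
for epoch in range(50):
    pred = predict(X)
    wrong = np.flatnonzero(pred != y)  # "lazy" step: only the errors
    if wrong.size == 0:
        break
    for i in wrong:
        # Perceptron-style correction on the misclassified pattern:
        # push toward the true class, away from the predicted one.
        W[:, y[i]] += lr * X[i]
        b[y[i]] += lr
        W[:, pred[i]] -= lr * X[i]
        b[pred[i]] -= lr

acc = (predict(X) == y).mean()
```

On this well-separated toy problem the loop converges quickly; in the paper's setting the same principle is applied to multilayer backpropagation networks interacting as an ensemble, which this sketch does not attempt to reproduce.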

Original language: English
Pages: 2007-2012
Number of pages: 6
State: Published - 2001
Event: International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States
Duration: 15 Jul 2001 - 19 Jul 2001

Conference

Conference: International Joint Conference on Neural Networks (IJCNN'01)
Country/Territory: United States
City: Washington, DC
Period: 15/07/01 - 19/07/01

