Abstract
Artificial neural networks provide effective empirical predictive models for pattern classification. However, training complex neural networks on very large training sets is often problematic, imposing prohibitive time demands on the training process. We present four practical methods for dramatically decreasing training time through dynamic stochastic sample presentation, a technique we call speed training. These methods are shown to retain generalization accuracy across a diverse collection of real-world data sets. In particular, the SET technique achieves a training speedup of 4278% on a large OCR database with no detectable loss in generalization.
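The abstract does not spell out the SET algorithm itself, but the core idea of dynamic stochastic sample presentation can be illustrated with a minimal sketch. The version below assumes that samples the network already classifies correctly are presented with exponentially decaying probability, while misclassified samples are always restored to full presentation; the decay factor, presentation probabilities, and toy logistic model are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of dynamic stochastic sample presentation ("speed training").
# Assumption: correctly classified samples are skipped with increasing probability,
# so later epochs spend time only on samples the model still gets wrong.
# This is an illustration of the general idea, not the paper's SET algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2-D.
n = 1000
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)),
               rng.normal(+1.0, 1.0, (n // 2, 2))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

w = np.zeros(2)          # weights of a single logistic unit
b = 0.0                  # bias
lr = 0.1                 # learning rate
decay = 0.5              # presentation probability halves after each correct pass
p_present = np.ones(n)   # start by presenting every sample
presented = 0

for epoch in range(20):
    for i in rng.permutation(n):
        if rng.random() > p_present[i]:
            continue     # stochastically skip samples the model already handles
        presented += 1
        z = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))   # sigmoid output
        err = z - y[i]                               # cross-entropy gradient term
        w -= lr * err * X[i]                         # standard online gradient step
        b -= lr * err
        if (z > 0.5) == bool(y[i]):
            p_present[i] *= decay   # throttle samples classified correctly
        else:
            p_present[i] = 1.0      # misclassified: restore full presentation

acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y.astype(bool))
print(f"presented {presented} of {20 * n} possible presentations, accuracy {acc:.3f}")
```

On easy data such as this, most presentations are skipped after the first few epochs while accuracy is unaffected, which is the kind of speed/generalization trade-off the abstract reports.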
| Original language | English |
| --- | --- |
| Pages | 2661-2666 |
| Number of pages | 6 |
| State | Published - 2001 |
| Event | International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States. Duration: 15 Jul 2001 → 19 Jul 2001 |
Conference
| Conference | International Joint Conference on Neural Networks (IJCNN'01) |
| --- | --- |
| Country/Territory | United States |
| City | Washington, DC |
| Period | 15/07/01 → 19/07/01 |