Abstract
This paper examines the tendency of backpropagation-based training algorithms to favor examples with large input feature values, in the sense that such examples exert greater influence on the network's weights, and shows that this tendency can lead to sub-optimal decision surfaces. We propose a method for counteracting this tendency by augmenting the original input feature vector with complementary inputs.
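One common way to realize complementary inputs, sketched below, is to append the complement (1 - x) of each feature to the input vector, assuming features are scaled to [0, 1]. This is an illustrative assumption, not necessarily the exact construction used in the paper; the function name `add_complementary_inputs` is hypothetical.

```python
import numpy as np

def add_complementary_inputs(X):
    """Augment each feature vector with its complement (1 - x).

    Assumes features are scaled to [0, 1] (an assumption for this sketch).
    After augmentation, each pair x_i and (1 - x_i) sums to 1, so every
    example carries the same total input magnitude, and examples with
    small raw feature values can influence the weights as strongly as
    examples with large ones.
    """
    X = np.asarray(X, dtype=float)
    return np.concatenate([X, 1.0 - X], axis=1)

# Example: one example with large feature values, one with small ones
X = np.array([[0.9, 0.8],
              [0.1, 0.05]])
X_aug = add_complementary_inputs(X)
# Each augmented row now sums to the number of original features (here 2),
# equalizing the total input magnitude across examples.
```

The augmented vectors are then fed to the network in place of the originals; the network's input layer simply doubles in width.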
| Original language | English |
| --- | --- |
| Pages | 263-267 |
| Number of pages | 5 |
| State | Published - 2002 |
| Event | 2002 International Joint Conference on Neural Networks (IJCNN '02) - Honolulu, HI, United States. Duration: 12 May 2002 → 17 May 2002 |
Conference
| Conference | 2002 International Joint Conference on Neural Networks (IJCNN '02) |
| --- | --- |
| Country/Territory | United States |
| City | Honolulu, HI |
| Period | 12/05/02 → 17/05/02 |