Abstract
Neural Networks (NNs) have been shown to be accurate classifiers in many domains, often superior to other statistical and data mining techniques. Unfortunately, NNs do not provide an easy way to explain how they arrived at their accurate results, which has somewhat limited their use within organisations, as managers desire both accuracy and understanding. A stream of research has developed around extracting 'knowledge' from within NNs, primarily in the form of rule extraction algorithms. However, there is a lack of empirical studies comparing existing algorithms on relevant performance measures in realistic settings. This study begins to fill that gap by comparing two approaches to extracting IF-THEN rules from feedforward NNs on large, realistic data sets. The results show a significant difference in the performance of the two algorithms depending on the knowledge structure present in the data set. Implications for future research and for organisational use of NNs are also briefly discussed.
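The abstract does not specify the two algorithms compared, but the general idea of extracting IF-THEN rules from a trained feedforward NN can be sketched with the common "pedagogical" (surrogate-model) approach: fit a transparent model to the network's predictions and read rules off it. The sketch below is illustrative only, using scikit-learn and synthetic data in place of the study's data sets and methods.

```python
# Hedged sketch of pedagogical rule extraction: a decision tree is fitted to
# the NN's predictions, and its branches serve as IF-THEN rules. This is a
# generic illustration, not the specific algorithms evaluated in the paper.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a large, realistic classification data set
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The opaque model: a feedforward NN classifier
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
nn.fit(X, y)

# Fit a shallow tree to the NN's *predictions* (not the true labels),
# then render its branches as human-readable IF-THEN rules
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, nn.predict(X))
rules = export_text(surrogate, feature_names=[f"x{i}" for i in range(8)])
print(rules)

# Fidelity: how often the extracted rules reproduce the NN's decisions,
# a typical performance measure in rule-extraction studies
fidelity = (surrogate.predict(X) == nn.predict(X)).mean()
print(f"fidelity: {fidelity:.2f}")
```

Fidelity (agreement with the network) rather than raw accuracy is the usual yardstick here, since the extracted rules are meant to explain the NN, not replace it.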
| Original language | English |
|---|---|
| Pages (from-to) | 62-74 |
| Number of pages | 13 |
| Journal | International Journal of Management and Decision Making |
| Volume | 9 |
| Issue number | 1 |
| DOIs | |
| State | Published - Dec 2008 |
Keywords
- Algorithm
- Classifier systems
- Data mining
- Decision support
- Feedforward neural network
- Knowledge extraction
- Neural Networks
- NNs
- Rules
Cite this
Assessing extracted knowledge from classifier neural networks: An exploratory empirical study. (2008). International Journal of Management and Decision Making, 9(1), 62-74.