Abstract
In this study, we introduce a generic experimental framework for measuring the generality and precision of attribution methodologies in the context of machine learning interpretability. In addition, we detail a method for gauging the consistency of two attribution approaches. In our experimental work, we concentrate on two well-known model-agnostic attribution techniques, SHAP and LIME, and evaluate them on two applications in the attack detection domain. Our methodology reveals a lack of precision, generality, and consistency in both LIME and SHAP; as a result, attribution methods need to be examined more carefully.
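As a rough illustration of what gauging the consistency of two attribution approaches can look like in practice, the sketch below compares SHAP and LIME attributions for the same instance using Spearman rank correlation. The toy dataset, the random-forest model, and the rank-correlation metric are assumptions made for illustration only, not the paper's actual framework or consistency measure.

```python
# Hypothetical sketch: compare SHAP and LIME attributions for one instance
# via Spearman rank correlation. Illustrative only; not the paper's metric.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Toy stand-in for an attack-detection dataset (binary: benign vs. attack).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
idx, n_features = 0, X.shape[1]

# SHAP attributions for the positive class; the output shape differs across
# shap versions (list of per-class arrays vs. a single 3-D array).
sv = shap.TreeExplainer(model).shap_values(X[idx:idx + 1])
shap_attr = np.abs(sv[1][0]) if isinstance(sv, list) else np.abs(sv[0, :, 1])

# LIME attributions for the same instance and class.
lime_explainer = LimeTabularExplainer(X, discretize_continuous=False)
exp = lime_explainer.explain_instance(
    X[idx], model.predict_proba, num_features=n_features, labels=(1,))
lime_attr = np.zeros(n_features)
for feature, weight in exp.as_map()[1]:
    lime_attr[feature] = abs(weight)

# Rank agreement of the two attribution vectors: 1.0 means both methods
# order the features identically; low values signal inconsistency.
rho, _ = spearmanr(shap_attr, lime_attr)
print(f"SHAP-vs-LIME rank agreement: {rho:.2f}")
```

Under such a setup, an instance-level consistency score can be averaged over a test set; a low average would point to the kind of inconsistency between LIME and SHAP that the abstract reports.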
Original language | English |
---|---|
Pages (from-to) | 65-75 |
Number of pages | 11 |
Journal | CEUR Workshop Proceedings |
Volume | 3478 |
State | Published - 2023 |
Event | 31st Symposium on Advanced Database Systems, SEBD 2023 - Galzignano Terme, Italy. Duration: 2 Jul 2023 → 5 Jul 2023 |
Keywords
- Artificial Intelligence
- Feature Attribution Methods
- Machine Learning Interpretability