Attribution Methods Assessment for Interpretable Machine Learning

Alfredo Cuzzocrea, Qudrat E. Alahy Ratul, Islam Belmerabet, Edoardo Serra

Research output: Contribution to journal › Conference article › peer-review

Abstract

In this study, we introduce a generic experimental framework for measuring the generality and precision of attribution methodologies in the context of machine learning interpretability. In addition, we detail a method for gauging the consistency between two attribution approaches. In our experimental work, we concentrate on two well-known model-independent attribution techniques, namely SHAP and LIME, and evaluate them on two applications in the attack detection domain. Our methodology demonstrates that both LIME and SHAP lack precision, generality, and consistency. As a result, attribution methods need to be examined more carefully.
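To make the comparison concrete, the sketch below shows one plausible way to contrast SHAP and LIME attributions for a single instance and score their agreement. It is an illustration only, not the authors' framework: the toy dataset, the random forest model, the KernelExplainer settings, and the choice of Spearman rank correlation as the consistency measure are all assumptions made for this example.

import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy classification task standing in for an attack-detection dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def predict_pos(data):
    # Positive-class probability: the scalar output both methods attribute.
    return model.predict_proba(data)[:, 1]

# SHAP: model-independent KernelExplainer over a small background sample.
shap_explainer = shap.KernelExplainer(predict_pos, shap.sample(X, 50, random_state=0))
shap_vals = shap_explainer.shap_values(X[0], nsamples=200)

# LIME: tabular explainer applied to the same instance and class.
lime_explainer = LimeTabularExplainer(X, mode="classification")
exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=X.shape[1])
lime_vals = np.zeros(X.shape[1])
for feature_idx, weight in exp.as_map()[1]:
    lime_vals[feature_idx] = weight

# One possible consistency score: rank agreement between the two vectors.
rho, _ = spearmanr(shap_vals, lime_vals)
print(f"Spearman correlation between SHAP and LIME attributions: {rho:.3f}")

Repeating such a comparison across many instances, models, and datasets is the kind of evaluation the abstract describes; low or unstable correlations would mirror the inconsistency the authors report.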

Original language: English
Pages (from-to): 65-75
Number of pages: 11
Journal: CEUR Workshop Proceedings
Volume: 3478
State: Published - 2023
Event: 31st Symposium on Advanced Database Systems, SEBD 2023 - Galzignano Terme, Italy
Duration: 2 Jul 2023 - 5 Jul 2023

Keywords

  • Artificial Intelligence
  • Feature Attribution Methods
  • Machine Learning Interpretability
