TY - GEN
T1 - GAPS
T2 - 2022 IEEE International Conference on Big Data, Big Data 2022
AU - Daley, Brian
AU - Ratul, Qudrat E. Alahy
AU - Serra, Edoardo
AU - Cuzzocrea, Alfredo
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - In an age of the growing use of machine learning, it has become an imperative task to explain the processes behind the functions of many "black box" models. The explainability of artificial intelligence is key to building trust between humans and computers' algorithmic predictions. One of the main ways to generate this interpretability is through attribution methods, which produce importance values for each feature of a single instance in a dataset. There are many different approaches to attribution for machine-learning models, including methods designed for specific models and "model-agnostic" attribution methods, which do not require a specific model to produce importance values. These attribution methods are valued because they are easy to understand. While evaluation procedures such as generality and precision exist for rule-based explanation methods, they were not applied to attribution methods until recently. A recent experiment by Ratul et al. [1] showed that the two most popular local model-agnostic attribution methods, LIME and SHAP, have poor precision and generality. In this paper, we propose a new attribution method, Generality and Precision Shapley Attributions (GAPS). To evaluate it, we use the generality and precision equations previously used to evaluate the other methods. We present our findings that GAPS produces higher generality and precision scores than the existing LIME and SHAP methods.
KW - Attribution Methods
KW - Explainable Artificial Intelligence
KW - Generality and Precision
KW - Interpretable Machine-learning
UR - http://www.scopus.com/inward/record.url?scp=85147930441&partnerID=8YFLogxK
DO - 10.1109/BigData55660.2022.10021127
M3 - Conference contribution
AN - SCOPUS:85147930441
T3 - Proceedings - 2022 IEEE International Conference on Big Data, Big Data 2022
SP - 5444
EP - 5450
BT - Proceedings - 2022 IEEE International Conference on Big Data, Big Data 2022
A2 - Tsumoto, Shusaku
A2 - Ohsawa, Yukio
A2 - Chen, Lei
A2 - Van den Poel, Dirk
A2 - Hu, Xiaohua
A2 - Motomura, Yoichi
A2 - Takagi, Takuya
A2 - Wu, Lingfei
A2 - Xie, Ying
A2 - Abe, Akihiro
A2 - Raghavan, Vijay
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 17 December 2022 through 20 December 2022
ER -