Multi-objective Feature Attribution Explanation For Explainable Machine Learning

Ziming WANG, Changwu HUANG, Yun LI, Xin YAO

Research output: Journal Publications › Journal Article (refereed) › peer-review

5 Citations (Scopus)

Abstract

Feature attribution-based explanation (FAE) methods, which indicate how much each input feature contributes to a model's output for a given data point, are among the most popular categories of explainable machine learning techniques. Although various metrics have been proposed to evaluate explanation quality, no single metric captures every aspect of an explanation, and different metrics can lead to different conclusions. Moreover, when generating explanations, existing FAE methods either consider no evaluation metric at all or consider only the faithfulness of the explanation, failing to account for multiple metrics simultaneously. To address this issue, we formulate the construction of FAE explainable models as a multi-objective learning problem that considers multiple explanation quality metrics simultaneously. We first reveal conflicts between various explanation quality metrics, including faithfulness, sensitivity, and complexity. We then define the resulting multi-objective explanation problem and propose a multi-objective feature attribution explanation (MOFAE) framework to address it. Subsequently, we instantiate the framework by simultaneously considering the explanation's faithfulness, sensitivity, and complexity. Experimental comparisons with six state-of-the-art FAE methods on eight datasets demonstrate that our method can optimize multiple conflicting metrics simultaneously and provides explanations with higher faithfulness, lower sensitivity, and lower complexity than the compared methods. The results also show that our method has better diversity, i.e., it provides multiple explanations that achieve different trade-offs between the conflicting explanation quality metrics. It can therefore provide tailored explanations to different stakeholders based on their specific requirements.
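To make the multi-objective formulation concrete, the sketch below casts the search for attribution vectors as a Pareto problem over two of the three quality metrics named in the abstract. It is a minimal illustration under stated assumptions, not the authors' MOFAE implementation: the correlation-based faithfulness and entropy-based complexity definitions, the stand-in linear model, and the random-sampling Pareto filter (in place of the evolutionary multi-objective optimizer a MOFAE instantiation would use) are all choices made here for brevity, and sensitivity would enter as a third objective in the same way.

```python
# Minimal sketch only: a random-sampling Pareto filter over candidate
# attribution vectors, with assumed (not the paper's exact) metric definitions.
import numpy as np

rng = np.random.default_rng(0)
D = 5  # number of input features

def model(x):
    """Stand-in black-box scorer; any trained model could take its place."""
    w = np.array([0.8, -0.5, 0.1, 0.05, 0.0])
    return float(x @ w)

def faithfulness_loss(phi, x, baseline=0.0):
    """Negative correlation between attributions and the output change when
    each feature is individually replaced by a baseline (lower = more faithful)."""
    drops = np.array([model(x) - model(np.where(np.arange(D) == i, baseline, x))
                      for i in range(D)])
    return -float(np.corrcoef(phi, drops)[0, 1])

def complexity(phi):
    """Entropy of the normalized absolute attributions (lower = sparser, simpler)."""
    p = np.abs(phi) / (np.abs(phi).sum() + 1e-12)
    return float(-np.sum(p * np.log(p + 1e-12)))

def is_dominated(f, others):
    """Pareto dominance for minimization: f is dominated if some other point is
    no worse in every objective and strictly better in at least one."""
    return any(np.all(o <= f) and np.any(o < f) for o in others)

x = rng.normal(size=D)                    # the data point to explain
candidates = rng.normal(size=(300, D))    # candidate attribution vectors
objectives = np.array([[faithfulness_loss(c, x), complexity(c)]
                       for c in candidates])

# Keep the non-dominated set: each survivor realizes a different trade-off
# between faithfulness and complexity.
pareto = [i for i in range(len(candidates))
          if not is_dominated(objectives[i], np.delete(objectives, i, axis=0))]
print(f"{len(pareto)} non-dominated explanations out of {len(candidates)}")
```

The surviving non-dominated set is the kind of diverse trade-off front the abstract refers to: each member favors a different balance among the quality metrics, so different stakeholders can select the explanation that matches their requirements.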
Original language: English
Article number: 2
Number of pages: 32
Journal: ACM Transactions on Evolutionary Learning and Optimization
Volume: 4
Issue number: 1
Early online date: 29 Aug 2023
Publication status: Published - 23 Feb 2024
Externally published: Yes

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62250710682), the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), the Science and Technology Innovation Committee Foundation of Shenzhen (Grant No. ZDSYS201703031748284), and the Research Institute of Trustworthy Autonomous Systems.
