TY - GEN
T1 - An Explainable Feature Selection Approach for Fair Machine Learning
AU - YANG, Zhi
AU - WANG, Ziming
AU - HUANG, Changwu
AU - YAO, Xin
N1 - This work was supported by the National Natural Science Foundation of China (Grant No. 62250710682), the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), the Shenzhen Science and Technology Program (Grant No. KQTD2016112514355531), and the Research Institute of Trustworthy Autonomous Systems.
PY - 2023
AB - As machine learning (ML) algorithms are increasingly adopted in various fields to make decisions that affect human beings and society, the fairness of algorithmic decision-making has been widely studied. To mitigate unfairness in ML, many techniques have been proposed, including pre-processing, in-processing, and post-processing approaches. In this work, we propose an explainable feature selection (ExFS) method that improves the fairness of ML by recursively eliminating features that contribute to unfairness, based on feature attribution explanations of the model’s predictions. To validate the effectiveness of the proposed ExFS method, we compare it with other fairness-aware feature selection methods on several commonly used datasets. The experimental results show that ExFS effectively improves fairness by recursively dropping features that contribute to unfairness. ExFS generally outperforms the compared filter-based feature selection methods in terms of fairness and achieves results comparable to those of the compared wrapper-based methods. In addition, our method provides explanations of the rationale underlying its fairness-aware feature selection. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
KW - Ethics of AI
KW - Fairness in machine learning
KW - Feature attribution explanation
KW - Feature selection
KW - Group fairness
UR - http://www.scopus.com/inward/record.url?scp=85174595809&partnerID=8YFLogxK
DO - 10.1007/978-3-031-44198-1_7
M3 - Conference paper (refereed)
SN - 9783031441974
T3 - Lecture Notes in Computer Science
SP - 75
EP - 86
BT - Artificial Neural Networks and Machine Learning – ICANN 2023: 32nd International Conference on Artificial Neural Networks, Heraklion, Crete, Greece, September 26–29, 2023, Proceedings, Part VIII
A2 - ILIADIS, Lazaros
A2 - PAPALEONIDAS, Antonios
A2 - ANGELOV, Plamen
A2 - JAYNE, Chrisina
PB - Springer
T2 - 32nd International Conference on Artificial Neural Networks
Y2 - 26 September 2023 through 29 September 2023
ER -