Mitigating Unfairness via Evolutionary Multiobjective Ensemble Learning

Qingquan ZHANG, Jialin LIU, Zeqi ZHANG, Junyi WEN, Bifei MAO, Xin YAO

Research output: Journal Publications › Journal Article (refereed) › peer-review

4 Citations (Scopus)

Abstract

In the literature on mitigating unfairness in machine learning (ML), many fairness measures have been designed to evaluate the predictions of learning models and to guide the training of fair models. It has been shown, both theoretically and empirically, that conflicts and inconsistencies exist among accuracy and multiple fairness measures: optimizing one or several fairness measures may sacrifice or deteriorate others. Two key questions therefore arise: 1) how to simultaneously optimize accuracy and multiple fairness measures, and 2) how to optimize all the considered fairness measures more effectively. In this article, we view unfairness mitigation as a multiobjective learning problem, taking the conflicts among fairness measures into account. A multiobjective evolutionary learning framework is used to simultaneously optimize several metrics (including accuracy and multiple fairness measures) of ML models. Ensembles are then constructed from the learned models to automatically balance the different metrics. Empirical results on eight well-known datasets demonstrate that, compared with state-of-the-art approaches for mitigating unfairness, the proposed algorithm provides decision makers with better tradeoffs among accuracy and multiple fairness metrics. Furthermore, the high-quality models generated by the framework can be combined into an ensemble that automatically achieves a better tradeoff among all the considered fairness metrics than other ensemble methods.
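The core idea sketched in the abstract — scoring candidate models on accuracy and fairness simultaneously and keeping only the nondominated tradeoffs — can be illustrated in a few lines. The sketch below is not the authors' implementation; it assumes binary predictions, a binary sensitive attribute, demographic parity difference as the (single, illustrative) fairness measure, and a plain Pareto-front filter in place of the full evolutionary framework:

```python
# Illustrative sketch (NOT the paper's algorithm): evaluate candidate models on
# accuracy and one fairness measure, then keep the Pareto-nondominated ones.

def accuracy(pred, label):
    # Fraction of predictions matching the ground-truth labels.
    return sum(p == y for p, y in zip(pred, label)) / len(label)

def demographic_parity_diff(pred, group):
    # |P(pred=1 | group=0) - P(pred=1 | group=1)|; lower means fairer.
    def positive_rate(g):
        members = [p for p, s in zip(pred, group) if s == g]
        return sum(members) / len(members)
    return abs(positive_rate(0) - positive_rate(1))

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    # Indices of candidates not dominated by any other candidate.
    return [i for i, a in enumerate(scores)
            if not any(dominates(b, a) for j, b in enumerate(scores) if j != i)]

# Toy data: labels, a binary sensitive attribute, and three candidate models.
y      = [1, 0, 1, 0, 1, 0]
group  = [0, 0, 0, 1, 1, 1]
models = [
    [1, 1, 1, 0, 1, 0],  # fairly accurate, but favours group 0
    [1, 1, 1, 1, 1, 1],  # equal positive rates, poor accuracy
    [1, 0, 1, 0, 1, 0],  # perfectly accurate, moderate parity gap
]

# Both objectives are maximized: accuracy and (1 - parity difference).
scores = [(accuracy(p, y), 1 - demographic_parity_diff(p, group)) for p in models]
front = pareto_front(scores)  # models on the accuracy/fairness tradeoff front
```

On this toy data the first model is dominated (the third is both more accurate and more balanced), so the front contains the last two models — the kind of accuracy/fairness tradeoff set from which the paper's framework builds its ensembles.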
Original language: English
Pages (from-to): 848-862
Number of pages: 15
Journal: IEEE Transactions on Evolutionary Computation
Volume: 27
Issue number: 4
Early online date: 26 Sept 2022
Publication status: Published - Aug 2023
Externally published: Yes

Bibliographical note

This work was supported in part by the Research Institute of Trustworthy Autonomous Systems (RITAS); in part by the Guangdong Provincial Key Laboratory under Grant 2020B121201001; in part by the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under Grant 2017ZT07X386; in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2021A1515011830; and in part by the National Natural Science Foundation of China under Grant 61906083.

Keywords

  • AI ethics
  • ensembles of learning machines
  • fairness in machine learning (ML)
  • fairness measures
  • multiobjective learning
