
Procedural Fairness in Machine Learning

  • Ziming WANG
  • Changwu HUANG*
  • Ke TANG*
  • Xin YAO

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

Fairness in machine learning (ML) has garnered significant attention. However, current research has mainly concentrated on the distributive fairness of ML models, with limited attention to another dimension of fairness, i.e., procedural fairness. In this paper, we first define the procedural fairness of ML models by drawing on the established understanding of procedural fairness in philosophy and psychology, and then give formal definitions of individual and group procedural fairness. Based on the proposed definition, we further propose a novel metric, called GPFFAE, to evaluate the group procedural fairness of ML models; it uses a widely adopted explainable artificial intelligence technique, feature attribution explanation (FAE), to capture the decision process of ML models. We validate the effectiveness of GPFFAE on a synthetic dataset and eight real-world datasets, and our experimental studies reveal the relationship between the procedural and distributive fairness of ML models. After validating the proposed metric, we propose a method for identifying the features that cause a model's procedural unfairness, together with two methods that improve procedural fairness based on the identified unfair features. Our experimental results demonstrate that the features leading to procedural unfairness in the ML model can be identified accurately, and that both proposed methods significantly improve procedural fairness while also improving distributive fairness, at the cost of a slight drop in model performance.
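The abstract does not give the formula for GPFFAE, so the following is only a minimal sketch of the general idea it describes: use feature attribution explanations to represent the model's decision process, then compare attributions across demographic groups. The occlusion-style attribution and the L2 gap between group-mean attribution vectors are assumptions chosen for simplicity, not the paper's actual metric; the function names (`feature_attributions`, `group_procedural_gap`) are hypothetical.

```python
import numpy as np

def feature_attributions(model_fn, X, baseline):
    """Occlusion-style attribution: the change in model output when each
    feature is replaced by its baseline value. A simple stand-in for FAE
    methods such as SHAP; not the paper's exact procedure."""
    base_pred = model_fn(X)
    attrs = np.zeros_like(X, dtype=float)
    for j in range(X.shape[1]):
        X_occ = X.copy()
        X_occ[:, j] = baseline[j]
        attrs[:, j] = base_pred - model_fn(X_occ)
    return attrs

def group_procedural_gap(model_fn, X, group, baseline):
    """Distance between the mean attribution vectors of two groups.
    A larger gap suggests the model weighs features differently across
    groups, i.e., a less procedurally fair decision process."""
    attrs = feature_attributions(model_fn, X, baseline)
    mean_a = attrs[group == 0].mean(axis=0)
    mean_b = attrs[group == 1].mean(axis=0)
    return float(np.linalg.norm(mean_a - mean_b))

# Toy example: a linear model scored on synthetic data with a random
# (task-irrelevant) group attribute, so the gap should be small.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
group = (rng.random(200) < 0.5).astype(int)
w = np.array([1.0, -2.0, 0.5])
model_fn = lambda X: X @ w
gap = group_procedural_gap(model_fn, X, group, baseline=X.mean(axis=0))
```

For a linear model the occlusion attribution of feature j reduces to w_j * (x_j - baseline_j), so the gap here only reflects sampling noise in the group means; a model that genuinely routes its decision through different features per group would produce a much larger value.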

Original language: English
Article number: 20
Number of pages: 30
Journal: Journal of Artificial Intelligence Research
Volume: 85
DOIs
Publication status: Published - Feb 2026

Bibliographical note

Publisher Copyright:
© 2026 Copyright held by the owner/author(s).

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62250710682), the Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), and an internal grant of Lingnan University.

Keywords

  • machine learning

