TY - JOUR
T1 - Fuzzy inference system with interpretable fuzzy rules: Advancing explainable artificial intelligence for disease diagnosis—A comprehensive review
AU - CAO, Jin
AU - ZHOU, Ta
AU - ZHI, Shaohua
AU - LAM, Saikit
AU - REN, Ge
AU - ZHANG, Yuanpeng
AU - WANG, Yongqiang
AU - DONG, Yanjing
AU - CAI, Jing
PY - 2024/3
Y1 - 2024/3
N2 - Interpretable artificial intelligence (AI), also known as explainable AI, is indispensable in establishing trustable AI for bench-to-bedside translation, with substantial implications for human well-being. However, the majority of existing research in this area has centered on designing complex and sophisticated methods, regardless of their interpretability. Consequently, the main prerequisite for implementing trustworthy AI in medical domains has not been met. Scientists have developed various explanation methods for interpretable AI. Among these methods, fuzzy rules embedded in a fuzzy inference system (FIS) have emerged as a novel and powerful tool to bridge the communication gap between humans and advanced AI machines. However, there have been few reviews of the use of FISs in medical diagnosis. In addition, the application of fuzzy rules to different kinds of multimodal medical data has received insufficient attention, despite the potential use of fuzzy rules in designing appropriate methodologies for available datasets. This review provides a fundamental understanding of interpretability and fuzzy rules, conducts comparative analyses of the use of fuzzy rules and other explanation methods in handling three major types of multimodal data (i.e., sequence signals, medical images, and tabular data), and offers insights into appropriate fuzzy rule application scenarios and recommendations for future research.
KW - Disease diagnosis
KW - Explainable artificial intelligence
KW - Fuzzy inference system
KW - Fuzzy rule
KW - Interpretability
UR - http://www.scopus.com/inward/record.url?scp=85184772835&partnerID=8YFLogxK
U2 - 10.1016/j.ins.2024.120212
DO - 10.1016/j.ins.2024.120212
M3 - Journal Article (refereed)
AN - SCOPUS:85184772835
SN - 0020-0255
VL - 662
JO - Information Sciences
JF - Information Sciences
M1 - 120212
ER -