Balancing privacy and performance: An empirical study of machine unlearning in deep learning models

  • Tazeem AHMAD*
  • Xiaohui TAO
  • Jianming YONG
  • Thanveer SHAIK
  • Haoran XIE
  • Yuefeng LI
  • U. Rajendra ACHARYA

*Corresponding author for this work

Research output: Journal Publications, Journal Article (refereed), peer-reviewed

Abstract

In the field of Artificial Intelligence (AI), the data used to train models may contain private information that could potentially be exposed in the model’s output. Machine Unlearning (MU) has emerged as a promising solution for removing private or obsolete data, along with its influence, from trained models, thereby enforcing the “right to be forgotten” under the General Data Protection Regulation (GDPR). However, achieving a balance between privacy guarantees and model performance remains a fundamental challenge. This paper contributes to the field of AI by presenting an empirical evaluation of three key families of MU for privacy preservation, namely data deletion, data perturbation, and model update, focusing on their impact on both classification accuracy and privacy in deep learning (DL) models. The study assesses changes in the classification performance of the convolutional neural network (CNN) architecture and the long short-term memory (LSTM) and bidirectional LSTM (Bi-LSTM) recurrent neural network (RNN) architectures when used with the data deletion, data perturbation, and model update families of MU. This study also assesses these architectures’ susceptibility to membership inference attacks (MIA) before and after unlearning on the PPG-DaLiA and MHEALTH (Mobile HEALTH) datasets, providing a quantitative measure of privacy leakage. Experimental results show that model update techniques offer more scalable alternatives to data deletion and perturbation, though they introduce varying levels of privacy leakage risk. In doing so, this research highlights the strengths and limitations of current targeted unlearning methods and underscores the need for more efficient and flexible approaches to privacy protection in DL models.
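The abstract uses membership inference attack (MIA) success as a quantitative measure of privacy leakage. The paper's own attack setup is not given here, so the following is only a minimal sketch of one common MIA baseline: a loss-threshold attack, which exploits the fact that training-set members tend to incur lower loss than held-out samples. All numbers and distributions below are hypothetical stand-ins, not the paper's data; attack accuracy near 0.5 (random guessing) would indicate low leakage, which is the desired outcome after unlearning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample losses: members of the training set typically
# have lower loss than non-members -- the signal a threshold MIA exploits.
member_losses = rng.normal(loc=0.2, scale=0.1, size=1000)
nonmember_losses = rng.normal(loc=0.6, scale=0.2, size=1000)

def mia_attack_accuracy(member_losses, nonmember_losses, threshold):
    """Predict 'member' when loss < threshold; return balanced accuracy."""
    tpr = np.mean(member_losses < threshold)      # members correctly flagged
    tnr = np.mean(nonmember_losses >= threshold)  # non-members correctly rejected
    return (tpr + tnr) / 2

# Sweep thresholds and report the strongest attack found; this best-case
# accuracy is a simple leakage score to compare before vs. after unlearning.
thresholds = np.linspace(0.0, 1.0, 101)
best = max(mia_attack_accuracy(member_losses, nonmember_losses, t)
           for t in thresholds)
print(f"best attack accuracy: {best:.2f}")
```

In an unlearning evaluation, this score would be computed twice: once on the original model (where the forget set should look like members) and once after unlearning (where it should become indistinguishable from non-members).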
Original language: English
Article number: 113530
Journal: Engineering Applications of Artificial Intelligence
Volume: 167
Issue number: Part I
Early online date: 9 Jan 2026
DOIs
Publication status: E-pub ahead of print, 9 Jan 2026

Funding

This paper was partially supported by Grant DP220101360 from the Australian Research Council.
