Abstract
In the field of Artificial Intelligence (AI), the data used to train models may contain private information that can be exposed through a model's output. Machine Unlearning (MU) has emerged as a promising solution for removing private or obsolete data, along with its influence, from trained models, thereby enforcing the “right to be forgotten” under the General Data Protection Regulation (GDPR). However, balancing privacy guarantees against model performance remains a fundamental challenge. This paper contributes an empirical evaluation of the key families of MU for privacy preservation, i.e., data deletion, data perturbation, and model update, focusing on their impact on both classification accuracy and privacy in deep learning (DL) models. The study assesses changes in the classification performance of a convolutional neural network (CNN) architecture and of long short-term memory (LSTM) and bidirectional LSTM (Bi-LSTM) recurrent neural network (RNN) architectures under each of these three MU families. It also assesses these architectures' susceptibility to membership inference attacks (MIA) before and after unlearning on the PPG-DaLiA and MHEALTH (Mobile HEALTH) datasets, providing a quantitative measure of privacy leakage. Experimental results show that model update techniques offer more scalable alternatives to data deletion and perturbation, though they introduce varying levels of privacy leakage risk. In doing so, this research highlights the strengths and limitations of current targeted unlearning methods and underscores the need for more efficient and flexible approaches to privacy protection in DL models.
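The privacy evaluation described above hinges on comparing membership inference success before and after unlearning. The sketch below is a minimal, illustrative loss-threshold MIA (not the paper's actual attack pipeline): it scores samples by their negative loss and reports the attack's AUC, where an AUC near 0.5 after unlearning would indicate that forgotten samples have become indistinguishable from unseen data. All losses here are synthetic placeholders.

```python
# Minimal sketch of a loss-threshold membership inference attack (MIA).
# Assumes per-sample losses can be queried from the model before and
# after unlearning; the distributions below are synthetic, for
# illustration only.
import numpy as np

def mia_auc(member_losses, nonmember_losses):
    """AUC of a threshold attack that predicts 'member' when a sample's
    loss is low (members are typically fit better than non-members)."""
    scores = np.concatenate([-member_losses, -nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])
    # Rank-based AUC via the Mann-Whitney U statistic.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(member_losses), len(nonmember_losses)
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Hypothetical losses: before unlearning, the forget set is fit tightly
# (low loss), so the attack separates it well; after unlearning, its
# losses should resemble those of unseen data, pushing AUC toward 0.5.
rng = np.random.default_rng(0)
before_member = rng.normal(0.2, 0.1, 1000).clip(min=0)
after_member = rng.normal(0.95, 0.3, 1000).clip(min=0)
nonmember = rng.normal(1.0, 0.3, 1000).clip(min=0)

print(f"MIA AUC before unlearning: {mia_auc(before_member, nonmember):.3f}")
print(f"MIA AUC after unlearning:  {mia_auc(after_member, nonmember):.3f}")
```

A drop in attack AUC toward 0.5 after unlearning is one common way to quantify reduced privacy leakage; stronger attacks (e.g., shadow-model MIAs) follow the same before/after comparison.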
| Field | Value |
|---|---|
| Original language | English |
| Article number | 113530 |
| Journal | Engineering Applications of Artificial Intelligence |
| Volume | 167 |
| Issue number | Part I |
| Early online date | 9 Jan 2026 |
| DOIs | |
| Publication status | E-pub ahead of print - 9 Jan 2026 |
Funding
This paper was partially supported by Grant DP220101360 from the Australian Research Council.