Differentiation-Based Extraction of Proprietary Data from Fine-Tuned LLMs

  • Zongjie LI
  • Daoyuan WU*
  • Shuai WANG*
  • Zhendong SU

*Corresponding author for this work

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review

2 Citations (Scopus)

Abstract

The increasing demand for domain-specific and human-aligned Large Language Models (LLMs) has led to the widespread adoption of Supervised Fine-Tuning (SFT) techniques. SFT datasets often comprise carefully curated instruction-response pairs, making them highly valuable targets for potential extraction. This paper studies this critical research problem for the first time. We start by formally defining and formulating the problem, then explore various attack goals, types, and variants based on the unique properties of SFT data in real-world scenarios. Building on our analysis of the behavior of direct extraction, we develop a novel extraction method specifically designed for SFT models, called Differentiated Data Extraction (DDE), which exploits the confidence levels of fine-tuned models and their behavioral differences from pre-trained base models. Through extensive experiments across multiple domains and scenarios, we demonstrate the feasibility of SFT data extraction using DDE. Our results show that DDE consistently outperforms existing extraction baselines in all attack settings. To counter this new attack, we propose a defense mechanism that mitigates DDE attacks with minimal impact on model performance. Overall, our research reveals hidden data-leak risks in fine-tuned LLMs and provides insights for developing more secure models.
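The core differentiation idea in the abstract can be illustrated with a minimal sketch: a candidate continuation that the fine-tuned model assigns much higher probability than the base model is more likely to originate from the SFT dataset. The sketch below is a hypothetical simplification, not the paper's actual DDE algorithm; the dummy per-token log-probabilities stand in for real model outputs.

```python
# Illustrative sketch of confidence differentiation between a fine-tuned
# model and its pre-trained base (hypothetical; not the paper's DDE method).

def sequence_logprob(token_logprobs):
    """Sum per-token log-probabilities into a sequence-level score."""
    return sum(token_logprobs)

def differentiation_score(ft_logprobs, base_logprobs):
    """Confidence gap between fine-tuned and base model for one candidate."""
    return sequence_logprob(ft_logprobs) - sequence_logprob(base_logprobs)

def rank_candidates(candidates):
    """candidates: list of (text, ft_logprobs, base_logprobs) tuples.

    Rank candidates by the fine-tuned-vs-base confidence gap, largest first.
    """
    scored = [(differentiation_score(ft, base), text)
              for text, ft, base in candidates]
    return sorted(scored, reverse=True)

# Dummy per-token log-probs: the first candidate is confidently reproduced
# by the fine-tuned model but surprising to the base model.
candidates = [
    ("memorized SFT response", [-0.1, -0.2, -0.1], [-2.0, -1.8, -2.2]),
    ("generic continuation",   [-1.0, -1.1, -0.9], [-1.1, -1.0, -1.0]),
]
ranking = rank_candidates(candidates)
print(ranking[0][1])  # → memorized SFT response
```

In a real attack setting, the dummy lists would be replaced by per-token log-probabilities queried from the fine-tuned model and its public base model on the same candidate text.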

Original language: English
Title of host publication: CCS '25: Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
Editors: Chun-Ying HUANG, Jyh-Cheng CHEN, Shiuhpyng SHIEH
Publisher: Association for Computing Machinery, Inc
Pages: 3071-3085
Number of pages: 15
ISBN (Electronic): 9798400715259
DOIs
Publication status: Published - 22 Nov 2025
Event: 32nd ACM SIGSAC Conference on Computer and Communications Security - Taipei, Taiwan, China
Duration: 13 Oct 2025 – 17 Oct 2025

Conference

Conference: 32nd ACM SIGSAC Conference on Computer and Communications Security
Abbreviated title: CCS 2025
Country/Territory: Taiwan, China
City: Taipei
Period: 13/10/25 – 17/10/25

Bibliographical note

Publisher Copyright:
© 2025 Copyright held by the owner/author(s).

Funding

The HKUST authors are supported in part by a RGC CRF grant under the contract C6015-23G and research fund provided by HSBC.

Keywords

  • Data Extraction
  • Large Language Model
