On Extracting Specialized Code Abilities from Large Language Models: A Feasibility Study

  • Zongjie LI
  • Chaozheng WANG
  • Pingchuan MA
  • Chaowei LIU
  • Shuai WANG*
  • Daoyuan WU*
  • Cuiyun GAO
  • Yang LIU

*Corresponding author for this work

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review

Abstract

Recent advances in large language models (LLMs) have significantly boosted their usage in software engineering. However, training a well-performing LLM demands a substantial workforce for data collection and annotation. Moreover, training datasets may be proprietary or only partially open, and the process often requires a costly GPU cluster. The intellectual property value of commercial LLMs makes them attractive targets for imitation attacks, but creating an imitation model with comparable parameters still incurs high costs. This motivates us to explore a practical and novel direction: slicing commercial black-box LLMs using medium-sized backbone models. In this paper, we explore the feasibility of launching imitation attacks on LLMs to extract their specialized code abilities, such as 'code synthesis' and 'code translation'. We systematically investigate the effectiveness of launching code ability extraction attacks under different code-related tasks with multiple query schemes, including zero-shot, in-context, and Chain-of-Thought. We also design response checks to refine the outputs, leading to an effective imitation training process. Our results show promising outcomes, demonstrating that with a reasonable number of queries, attackers can train a medium-sized backbone model to replicate specialized code behaviors similar to those of the target LLMs. We summarize our findings and insights to help researchers better understand the threats posed by imitation attacks, including revealing a practical attack surface for generating adversarial code examples against LLMs.
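The attack pipeline the abstract describes — querying a target LLM under different prompt schemes, then applying response checks before imitation training — can be sketched at a high level as follows. This is an illustrative reconstruction, not the authors' code: the prompt wording, the `build_prompt` and `filter_pairs` helpers, and the syntax-only response check are all assumptions made for the example (the paper's actual checks are more elaborate).

```python
import ast

def build_prompt(task: str, scheme: str, examples=None) -> str:
    """Build a query for the target LLM under one of the three schemes
    named in the abstract. The wording here is purely illustrative."""
    if scheme == "zero-shot":
        return f"Task: {task}\nAnswer:"
    if scheme == "in-context":
        shots = "\n\n".join(f"Task: {q}\nAnswer: {a}" for q, a in (examples or []))
        return f"{shots}\n\nTask: {task}\nAnswer:"
    if scheme == "chain-of-thought":
        return f"Task: {task}\nLet's think step by step, then give the final code.\nAnswer:"
    raise ValueError(f"unknown scheme: {scheme}")

def response_check(code: str) -> bool:
    """A minimal stand-in response check: keep only outputs that parse
    as syntactically valid Python."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def filter_pairs(pairs):
    """Retain (prompt, response) pairs that pass the response check,
    forming the dataset used to fine-tune the medium-sized backbone."""
    return [(p, r) for p, r in pairs if response_check(r)]
```

For instance, `filter_pairs([("q1", "x = 1"), ("q2", "def (")])` would keep only the first pair, discarding the malformed response before imitation training.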

Original language: English
Title of host publication: Proceedings of the IEEE/ACM 46th International Conference on Software Engineering
Publisher: Association for Computing Machinery, Inc
Pages: 893-905
Number of pages: 13
ISBN (Electronic): 9798400702174
DOIs
Publication status: Published - 20 May 2024
Externally published: Yes
Event: 46th ACM/IEEE International Conference on Software Engineering - Lisbon, Portugal
Duration: 14 Apr 2024 - 20 Apr 2024

Conference

Conference: 46th ACM/IEEE International Conference on Software Engineering
Abbreviated title: ICSE 2024
Country/Territory: Portugal
City: Lisbon
Period: 14/04/24 - 20/04/24

Bibliographical note

Publisher Copyright:
© 2024 ACM.

Funding

The HKUST authors were supported in part by RGC GRF grant 16214723, RMGS24EG03, and RMGS24CR01. The HITSZ authors were supported in part by the Natural Science Foundation of Guangdong Province under grant No. 2023A1515011959 and Shenzhen Basic Research under grant No. JCYJ20220531095214031. The NTU authors were supported in part by the National Research Foundation, Singapore, and the Cyber Security Agency under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN). Any opinions, findings, and conclusions or recommendations expressed in this material do not reflect the views of the National Research Foundation, Singapore, or the Cyber Security Agency of Singapore.

Keywords

  • Large Language Models
  • Imitation Attacks
