DecLLM: LLM-Augmented Recompilable Decompilation for Enabling Programmatic Use of Decompiled Code

Wai Kin WONG, Daoyuan WU*, Huaijin WANG, Zongjie LI, Zhibo LIU*, Shuai WANG*, Qiyi TANG, Sen NIE, Shi WU

*Corresponding author for this work

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review

Abstract

Decompilers are widely used in reverse engineering (RE) to convert compiled executables into human-readable pseudocode and support various security analysis tasks. Existing decompilers, such as IDA Pro and Ghidra, focus on enhancing the readability of decompiled code rather than its recompilability, which limits further programmatic use, such as for CodeQL-based vulnerability analysis that requires compilable versions of the decompiled code. Recent LLM-based approaches for enhancing decompilation results, while useful for human RE analysts, unfortunately also follow the same path.
In this paper, we explore, for the first time, how off-the-shelf large language models (LLMs) can be used to enable recompilable decompilation—automatically correcting decompiler outputs into compilable versions. We first show that this is non-trivial through a pilot study examining existing rule-based and LLM-based approaches. Based on the lessons learned, we design DecLLM, an iterative LLM-based repair loop that utilizes both static recompilation and dynamic runtime feedback as oracles to iteratively fix decompiler outputs. We test DecLLM on popular C benchmarks and real-world binaries using two mainstream LLMs, GPT-3.5 and GPT-4, and show that off-the-shelf LLMs can achieve an upper bound of around 70% recompilation success rate, i.e., 70 out of 100 originally non-recompilable decompiler outputs are now recompilable. We also demonstrate the practical applicability of the recompilable code for CodeQL-based vulnerability analysis, which is impossible to perform directly on binaries. For the remaining 30% of hard cases, we further delve into their errors to gain insights for future improvements in decompilation-oriented LLM design.
Original language: Undefined/Unknown
Title of host publication: Proceedings of the ACM on Software Engineering
Editors: Luciano BARESI
Publisher: Association for Computing Machinery
Pages: 1841-1864
Number of pages: 24
Volume: 2
Edition: ISSTA
DOIs
Publication status: Published - Jul 2025
Externally published: Yes
Event: 34th ACM SIGSOFT International Symposium on Software Testing and Analysis - Trondheim, Norway
Duration: 25 Jun 2025 – 28 Jun 2025

Publication series

Name: Proceedings of the ACM on Software Engineering
Publisher: Association for Computing Machinery
ISSN (Electronic): 2994-970X

Symposium

Symposium: 34th ACM SIGSOFT International Symposium on Software Testing and Analysis
Abbreviated title: ISSTA 2025
Country/Territory: Norway
City: Trondheim
Period: 25/06/25 – 28/06/25

Bibliographical note

Acknowledgements:
The authors would like to thank the anonymous reviewers for their valuable comments.

Funding

The HKUST authors were supported in part by a NSFC/RGC JRS grant under the contract N_HKUST605/23 and a CCF-Tencent Open Research Fund.

Keywords

  • Recompilable Decompilation
  • Reverse Engineering
  • Large Language Model
