Abstract
Decompilers are widely used in reverse engineering (RE) to convert compiled executables into human-readable pseudocode and to support various security analysis tasks. Existing decompilers, such as IDA Pro and Ghidra, focus on enhancing the readability of decompiled code rather than its recompilability, which limits further programmatic uses such as CodeQL-based vulnerability analysis, which requires compilable versions of the decompiled code. Recent LLM-based approaches for enhancing decompilation results, while useful for human RE analysts, unfortunately follow the same path.
In this paper, we explore, for the first time, how off-the-shelf large language models (LLMs) can be used to enable recompilable decompilation, i.e., automatically correcting decompiler outputs into compilable versions. We first show that this is non-trivial through a pilot study of existing rule-based and LLM-based approaches. Based on the lessons learned, we design DecLLM, an iterative LLM-based repair loop that uses both static recompilation and dynamic runtime feedback as oracles to iteratively fix decompiler outputs. We evaluate DecLLM on popular C benchmarks and real-world binaries using two mainstream LLMs, GPT-3.5 and GPT-4, and show that off-the-shelf LLMs can achieve an upper-bound recompilation success rate of around 70%, i.e., 70 out of 100 originally non-recompilable decompiler outputs become recompilable. We also demonstrate the practical applicability of the recompilable code for CodeQL-based vulnerability analysis, which cannot be performed directly on binaries. For the remaining 30% of hard cases, we further analyze their errors to gain insights for future improvements in decompilation-oriented LLM design.
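The abstract describes DecLLM as an iterative repair loop driven by two oracles: static recompilation and dynamic runtime feedback. The sketch below is a minimal, hypothetical illustration of such a loop, not the authors' implementation; the `query_llm` placeholder, the gcc-based oracles, the prompt wording, and the I/O-test format are all assumptions made for illustration only.

```python
# Hedged sketch of a recompilation-oriented repair loop (not the DecLLM code).
# Static oracle: does gcc recompile the candidate? Dynamic oracle: does the
# recompiled binary pass its I/O tests? Requires Python 3.10+ for the type hints.
import os
import subprocess
import tempfile


def query_llm(prompt: str) -> str:
    """Placeholder for a GPT-3.5/GPT-4 API call that returns a repaired C source."""
    raise NotImplementedError("plug in an LLM client here")


def static_oracle(code: str) -> tuple[bool, str]:
    """Try to recompile the candidate; return (success, compiler diagnostics)."""
    with tempfile.TemporaryDirectory() as tmp:
        src, exe = os.path.join(tmp, "cand.c"), os.path.join(tmp, "cand")
        with open(src, "w") as f:
            f.write(code)
        p = subprocess.run(["gcc", src, "-o", exe], capture_output=True, text=True)
        return p.returncode == 0, p.stderr


def dynamic_oracle(code: str, tests: list[tuple[str, str]]) -> tuple[bool, str]:
    """Recompile and run (stdin, expected stdout) tests; return (all passed, feedback)."""
    with tempfile.TemporaryDirectory() as tmp:
        src, exe = os.path.join(tmp, "cand.c"), os.path.join(tmp, "cand")
        with open(src, "w") as f:
            f.write(code)
        if subprocess.run(["gcc", src, "-o", exe], capture_output=True).returncode != 0:
            return False, "candidate no longer compiles"
        for stdin_data, expected in tests:
            p = subprocess.run([exe], input=stdin_data, capture_output=True, text=True)
            if p.stdout.strip() != expected.strip():
                return False, f"input {stdin_data!r}: expected {expected!r}, got {p.stdout!r}"
        return True, ""


def repair_loop(decompiled: str, tests: list[tuple[str, str]], max_rounds: int = 5) -> str | None:
    """Iteratively ask the LLM to fix decompiler output until both oracles pass."""
    candidate = decompiled
    for _ in range(max_rounds):
        compiles, diag = static_oracle(candidate)
        if not compiles:
            candidate = query_llm(
                f"Fix this decompiled C code so it compiles.\nErrors:\n{diag}\nCode:\n{candidate}")
            continue
        passes, feedback = dynamic_oracle(candidate, tests)
        if passes:
            return candidate  # recompilable and behavior-preserving on the given tests
        candidate = query_llm(
            f"The code compiles but fails tests.\nFeedback:\n{feedback}\nCode:\n{candidate}")
    return None  # hard case: not repaired within the round budget
```

Under this scheme, compiler diagnostics drive the early rounds, and runtime test feedback is only consulted once a candidate recompiles, mirroring the static-then-dynamic ordering of oracles described in the abstract.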
| Original language | Undefined/Unknown |
|---|---|
| Title of host publication | Proceedings of the ACM on Software Engineering |
| Editors | Luciano BARESI |
| Publisher | Association for Computing Machinery |
| Pages | 1841-1864 |
| Number of pages | 24 |
| Volume | 2 |
| Edition | ISSTA |
| DOIs | |
| Publication status | Published - Jul 2025 |
| Externally published | Yes |
| Event | 34th ACM SIGSOFT International Symposium on Software Testing and Analysis, Trondheim, Norway. Duration: 25 Jun 2025 → 28 Jun 2025 |
Publication series
| Name | Proceedings of the ACM on Software Engineering |
|---|---|
| Publisher | Association for Computing Machinery |
| ISSN (Electronic) | 2994-970X |
Symposium
| Symposium | 34th ACM SIGSOFT International Symposium on Software Testing and Analysis |
|---|---|
| Abbreviated title | ISSTA 2025 |
| Country/Territory | Norway |
| City | Trondheim |
| Period | 25/06/25 → 28/06/25 |
Bibliographical note
Acknowledgements: The authors would like to thank the anonymous reviewers for their valuable comments.
Funding
The HKUST authors were supported in part by an NSFC/RGC JRS grant under contract N_HKUST605/23 and a CCF-Tencent Open Research Fund.
Keywords
- Recompilable Decompilation
- Reverse Engineering
- Large Language Model