Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

With the continuous growth in the number of parameters of Transformer-based pretrained language models (PLMs), and particularly the emergence of large language models (LLMs) with billions of parameters, remarkable success has been achieved on many natural language processing (NLP) tasks. However, the enormous size and computational demands of these models pose significant challenges for adapting them to specific downstream tasks, especially in environments with limited computational resources. Parameter-Efficient Fine-Tuning (PEFT) offers an effective solution, reducing the number of fine-tuned parameters and the memory footprint while achieving performance comparable to full fine-tuning. The demand for fine-tuning PLMs, especially LLMs, has led to a surge in the development of PEFT methods, as depicted in Fig. 1. In this paper, we present a comprehensive and systematic review of PEFT methods for PLMs. We summarize these PEFT methods, discuss their applications, and outline future directions. Furthermore, we conduct extensive experiments with several representative PEFT methods to better understand their parameter and memory efficiency. By offering insights into the latest advancements and practical applications, this survey serves as a valuable resource for researchers and practitioners seeking to navigate the challenges and opportunities of PEFT in the context of PLMs.
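To make the abstract's parameter- and memory-efficiency claim concrete, the sketch below implements one representative PEFT method, low-rank adaptation (LoRA, Hu et al., 2021), in PyTorch. It is a minimal illustrative sketch, not code from the surveyed paper: the class name LoRALinear and the choices of rank r and scaling alpha are our own assumptions.

    # Minimal LoRA-style adapter (an illustrative sketch; names and
    # hyperparameters are assumptions, not taken from the surveyed paper).
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen pretrained linear layer plus a trainable low-rank update:
        y = W x + (alpha / r) * B A x, where only A and B receive gradients."""

        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():  # freeze the pretrained weights
                p.requires_grad = False
            self.scale = alpha / r
            # Low-rank factors: A maps d_in -> r, B maps r -> d_out.
            self.lora_a = nn.Parameter(0.01 * torch.randn(r, base.in_features))
            # B starts at zero, so the wrapped layer initially matches the original.
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

    layer = LoRALinear(nn.Linear(768, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.1f}%)")

For a 768-dimensional layer this reports roughly 2% of parameters as trainable; only the small A and B factors (and their optimizer states) need gradients, which is where the memory savings the abstract refers to come from.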
Original language: English
Number of pages: 26
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Early online date: 26 Jan 2026
Publication status: E-pub ahead of print - 26 Jan 2026

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.

Funding

The research described in this article has been supported by a research grant entitled "Medical Text Feature Representations based on Pre-trained Language Models" (871238); the Faculty Research Grants (DB24A4 and SDS24A8) and the Direct Grant (DR25E8) of Lingnan University, Hong Kong; and two grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (R1015-23 and UGC/FDS16/E17/23). (Corresponding author: Haoran Xie.) Lingling Xu is with the School of Science and Technology, Hong Kong Metropolitan University, Hong Kong, and also with the School of Data Science, Lingnan University, Hong Kong (e-mail: [email protected]).

Keywords

  • Parameter-efficient
  • fine-tuning
  • pretrained language model
  • large language model
  • memory usage
