On the validity of retrospective predictive performance evaluation procedures in just-in-time software defect prediction

Liyan SONG, Leandro L. MINKU, Xin YAO

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

Just-In-Time Software Defect Prediction (JIT-SDP) is concerned with predicting whether software changes are defect-inducing or clean. It operates in scenarios where labels of software changes arrive over time with delay, which in part corresponds to the time we wait before labeling software changes as clean (waiting time). However, clean labels decided based on waiting time may differ from the true labels of software changes, i.e., there may be label noise. This typically overlooked issue has recently been shown to affect the validity of continuous performance evaluation procedures used to monitor the predictive performance of JIT-SDP models during the software development process. It is still unknown whether this issue could also affect evaluation procedures that rely on retrospective collection of software changes, such as those adopted in JIT-SDP research studies, thereby affecting the validity of the conclusions of a large body of existing work. We conduct the first investigation of the extent to which the choice of waiting time and its corresponding label noise affect the validity of retrospective performance evaluation procedures. Based on 13 GitHub projects, we found that the choice of waiting time did not have a significant impact on the validity and that even small waiting times resulted in high validity. Therefore, (1) the estimated predictive performances in JIT-SDP studies are likely reliable in view of different waiting times, and (2) future studies can make use of not only larger (5k+ software changes) but also smaller (1k software changes) projects for evaluating the performance of JIT-SDP models. © 2023, The Author(s).
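The waiting-time labeling scheme described above can be illustrated with a minimal sketch. The function and field names below (`waiting_time_label`, `commit_time`, `defect_found_time`) are hypothetical and chosen for illustration only; they show how a retrospective evaluation that labels a change as clean once the waiting time has elapsed can produce label noise when a defect is only discovered after data collection.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Change:
    commit_time: float                   # time the software change was committed
    defect_found_time: Optional[float]   # time a defect was linked to it, or None

def waiting_time_label(change: Change,
                       collection_time: float,
                       waiting_time: float) -> Optional[str]:
    """Label a change as a retrospective evaluation with a fixed waiting time would."""
    # A defect was linked to this change before the data were collected.
    if change.defect_found_time is not None and change.defect_found_time <= collection_time:
        return "defect-inducing"
    # No defect observed yet; label as clean once the waiting time has elapsed.
    if collection_time - change.commit_time >= waiting_time:
        return "clean"   # potentially noisy: a defect could still surface later
    # Waiting time not elapsed: the label is not yet available.
    return None

# A change whose defect is only discovered after data collection:
c = Change(commit_time=0.0, defect_found_time=120.0)
label = waiting_time_label(c, collection_time=100.0, waiting_time=90.0)
# label == "clean", but the true label is "defect-inducing" -> label noise
```

A shorter waiting time labels more changes but raises the chance of such mislabeling; the study quoted above finds that, retrospectively, this trade-off has little effect on evaluation validity.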
Original language: English
Article number: 124
Journal: Empirical Software Engineering
Volume: 28
Issue number: 5
DOIs
Publication status: Published - Sept 2023
Externally published: Yes

Bibliographical note

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 62002148 and Grant No. 62250710682, the Guangdong Provincial Key Laboratory under Grant No. 2020B121201001, the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under Grant No. 2017ZT07X386, and the Research Institute of Trustworthy Autonomous Systems (RITAS).

Keywords

  • Concept drift
  • Just-in-time software defect prediction
  • Label noise
  • Online learning
  • Performance evaluation procedures
  • Verification latency
