A Practical Human Labeling Method for Online Just-in-Time Software Defect Prediction

Liyan SONG, Leandro Lei MINKU*, Cong TENG, Xin YAO*

*Corresponding author for this work

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review

2 Citations (Scopus)

Abstract

Just-in-Time Software Defect Prediction (JIT-SDP) can be seen as an online learning problem where additional software changes produced over time may be labeled and used to create training examples. These training examples form a data stream that can be used to update JIT-SDP models, preventing models from becoming obsolete and performing poorly. However, labeling procedures adopted in existing online JIT-SDP studies implicitly assume that practitioners would not inspect software changes upon a defect-inducing prediction, delaying the production of training examples. This is inconsistent with a real-world scenario where practitioners would adopt JIT-SDP models and inspect certain software changes predicted as defect-inducing to check whether they really induce defects. Such inspection means that some software changes would be labeled much earlier than assumed in existing work, potentially leading to different JIT-SDP models and performance results. This paper formulates a more practical human labeling procedure that takes into account the adoption of JIT-SDP models during the software development process. It then analyses whether and to what extent this procedure would impact the predictive performance of JIT-SDP models. We also propose a new method to target the labeling of software changes with the aim of saving human inspection effort. Experiments based on 14 GitHub projects revealed that adopting a more realistic labeling procedure led to significantly higher predictive performance than delaying the labeling process, meaning that existing work may have been underestimating the performance of JIT-SDP. In addition, our proposed method to target the labeling process was able to reduce human effort while maintaining predictive performance by recommending that practitioners inspect software changes that are more likely to induce defects. We encourage the adoption of more realistic human labeling methods in research studies to obtain an evaluation of JIT-SDP predictive performance that is closer to reality. © 2023 Owner/Author.
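The abstract contrasts the delayed, waiting-time-based labeling assumed in prior online JIT-SDP studies with an inspection-based procedure in which changes predicted as defect-inducing are checked, and therefore labeled, right away. The Python sketch below is a minimal illustration of that difference under stated assumptions, not the paper's implementation; the Change fields, the 90-day waiting_time_days default, and the label_stream helper are all illustrative names introduced here.

from dataclasses import dataclass
from typing import Iterator, List, Optional, Tuple

@dataclass
class Change:
    commit_time: float                 # days since the start of the project
    fix_linked_time: Optional[float]   # when a defect fix was linked back, or None if clean
    predicted_defective: bool          # model prediction made at commit time

def label_stream(changes: List[Change],
                 waiting_time_days: float = 90.0,
                 inspect_predicted: bool = False) -> Iterator[Tuple[float, bool]]:
    """Yield (label_time, is_defect_inducing) training examples in time order.

    inspect_predicted=False mimics the delayed procedure assumed in prior
    online JIT-SDP studies: a change becomes a training example either when a
    defect fix links back to it, or once the waiting time expires (assumed
    clean). inspect_predicted=True adds the human-inspection step: changes the
    model flags as defect-inducing are checked at commit time, so their true
    label is available immediately. (Relabeling of changes first assumed clean
    is omitted to keep the sketch short.)
    """
    events = []
    for c in changes:
        if inspect_predicted and c.predicted_defective:
            # Human inspection reveals the true label right away.
            events.append((c.commit_time, c.fix_linked_time is not None))
        elif c.fix_linked_time is not None:
            # Defect-inducing: labeled when the fixing change links back.
            events.append((c.fix_linked_time, True))
        else:
            # Assumed clean once the waiting time has elapsed.
            events.append((c.commit_time + waiting_time_days, False))
    yield from sorted(events)

# Example: the inspected change yields its (defective) label at day 10 instead of day 40.
stream = [Change(commit_time=10.0, fix_linked_time=40.0, predicted_defective=True),
          Change(commit_time=12.0, fix_linked_time=None, predicted_defective=False)]
print(list(label_stream(stream, inspect_predicted=True)))   # [(10.0, True), (102.0, False)]
print(list(label_stream(stream, inspect_predicted=False)))  # [(40.0, True), (102.0, False)]

In a real online evaluation, each yielded example would be fed to the model in time order, which is why the sketch sorts events by label time before emitting them.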
Original language: English
Title of host publication: ESEC/FSE 2023: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
Editors: Satish CHANDRA, Kelly BLINCOE, Paolo TONELLA
Publisher: Association for Computing Machinery, Inc
Pages: 605-617
Number of pages: 13
ISBN (Print): 9798400703270
DOIs
Publication status: Published - 30 Nov 2023
Externally published: Yes
Event: The 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering - San Francisco, United States
Duration: 3 Dec 2023 - 9 Dec 2023

Conference

Conference: The 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
Country/Territory: United States
City: San Francisco
Period: 3/12/23 - 9/12/23

Keywords

  • human inspection
  • human labeling
  • Just-in-time software defect prediction
  • online learning
  • verification latency
  • waiting time
