Abstract
Just-in-Time Software Defect Prediction (JIT-SDP) can be seen as an online learning problem in which additional software changes produced over time may be labeled and used to create training examples. These training examples form a data stream that can be used to update JIT-SDP models, helping to prevent them from becoming obsolete and performing poorly. However, the labeling procedures adopted in existing online JIT-SDP studies implicitly assume that practitioners would not inspect software changes upon a defect-inducing prediction, delaying the production of training examples. This is inconsistent with a real-world scenario in which practitioners would adopt JIT-SDP models and inspect certain software changes predicted as defect-inducing to check whether they really induce defects. Such inspection means that some software changes would be labeled much earlier than assumed in existing work, potentially leading to different JIT-SDP models and performance results. This paper formulates a more practical human labeling procedure that takes into account the adoption of JIT-SDP models during the software development process. It then analyses whether, and to what extent, this procedure impacts the predictive performance of JIT-SDP models. We also propose a new method to target the labeling of software changes with the aim of saving human inspection effort. Experiments based on 14 GitHub projects revealed that adopting the more realistic labeling procedure led to significantly higher predictive performance than delaying the labeling process, meaning that existing work may have underestimated the performance of JIT-SDP. In addition, our proposed method to target the labeling process reduced human effort while maintaining predictive performance by recommending that practitioners inspect the software changes most likely to induce defects. We encourage the adoption of more realistic human labeling methods in research studies to obtain an evaluation of JIT-SDP predictive performance that is closer to reality.
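To make the procedural difference concrete, the sketch below simulates an online JIT-SDP stream under both labeling procedures: the conventional one, where every change is labeled only after a fixed waiting time (verification latency), and a human-in-the-loop one, where changes predicted as defect-inducing are inspected and their labels used for training immediately. This is an illustrative assumption-based sketch, not the paper's method or data: the synthetic change generator, the two metrics (churn, entropy), the classifier, and the 90-step waiting time are all introduced here for demonstration.

```python
# Hedged sketch (NOT the paper's implementation): contrasts delayed
# labeling with immediate labeling of inspected positive predictions.
# Stream, metrics, and waiting time are illustrative assumptions.
import random

from sklearn.linear_model import SGDClassifier

WAITING_TIME = 90  # steps until a change's label becomes available


def synthetic_change(rng):
    """A toy software change: two numeric metrics and a hidden true label."""
    churn, entropy = rng.random(), rng.random()
    return {"x": [churn, entropy], "y": int(churn + entropy > 1.2)}


def run_stream(inspect_positives, n_changes=3000):
    rng = random.Random(0)
    model = SGDClassifier(random_state=0)  # supports incremental partial_fit
    fitted = False
    pending, correct, total = [], 0, 0
    for t in range(n_changes):
        change = synthetic_change(rng)
        change["arrival"] = t
        # Labels of old pending changes become available; train on them.
        ready = [c for c in pending if t - c["arrival"] >= WAITING_TIME]
        pending = [c for c in pending if t - c["arrival"] < WAITING_TIME]
        for c in ready:
            model.partial_fit([c["x"]], [c["y"]], classes=[0, 1])
            fitted = True
        pred = int(model.predict([change["x"]])[0]) if fitted else 0
        if fitted:
            total += 1
            correct += int(pred == change["y"])
        if inspect_positives and pred == 1:
            # A human inspects the flagged change: its true label is known
            # right away, so the model can be updated without waiting.
            model.partial_fit([change["x"]], [change["y"]], classes=[0, 1])
            fitted = True
        else:
            pending.append(change)  # label only after the waiting time
    return correct / max(total, 1)


print("delayed labeling only:", round(run_stream(False), 3))
print("with human inspection:", round(run_stream(True), 3))
```

Under this toy setup, the inspection-based procedure feeds the model fresher training examples for changes predicted as defect-inducing, which is the mechanism the abstract argues existing studies miss by assuming all labels are delayed.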
Original language | English
---|---
Title of host publication | ESEC/FSE 2023: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
Editors | Satish Chandra, Kelly Blincoe, Paolo Tonella
Publisher | Association for Computing Machinery, Inc.
Pages | 605-617
Number of pages | 13
ISBN (Print) | 9798400703270
DOIs |
Publication status | Published - 30 Nov 2023
Externally published | Yes
Event | The 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, San Francisco, United States (3 Dec 2023 → 9 Dec 2023)
Conference
Conference | The 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
---|---
Country/Territory | United States
City | San Francisco
Period | 3/12/23 → 9/12/23
Keywords
- human inspection
- human labeling
- just-in-time software defect prediction
- online learning
- verification latency
- waiting time