Abstract
Quantifying the uncertainty of supervised learning models plays an important role in making more reliable predictions. Epistemic uncertainty, which usually stems from insufficient knowledge about the model, can be reduced by collecting more data or refining the learning models. Over the last few years, many techniques for handling epistemic uncertainty have been proposed; they can be roughly grouped into two categories, i.e., Bayesian and ensemble methods. This paper provides a comprehensive review of epistemic uncertainty learning techniques in supervised learning over the last five years. To this end, we first decompose epistemic uncertainty into bias and variance terms. We then introduce a hierarchical categorization of epistemic uncertainty learning techniques along with their representative models. In addition, applications in several domains, such as computer vision (CV) and natural language processing (NLP), are presented, followed by a discussion of research gaps and possible future research directions.
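As a rough illustration of the ensemble category mentioned above (a minimal sketch, not taken from the paper itself), epistemic uncertainty is commonly approximated by the disagreement among independently trained predictors. The snippet below uses a hypothetical toy regression problem and scikit-learn decision trees, estimating epistemic uncertainty as the variance of predictions across bootstrap-trained ensemble members.

```python
# Illustrative sketch (not from the reviewed paper): approximating epistemic
# uncertainty as the spread of predictions across an ensemble of
# independently trained models. Data and model choices are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy 1-D regression problem with training data confined to [-3, 3].
X_train = rng.uniform(-3, 3, size=(40, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=40)
X_test = np.linspace(-6, 6, 200).reshape(-1, 1)

# Train an ensemble on bootstrap resamples of the training set.
n_members = 20
preds = []
for _ in range(n_members):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    model = DecisionTreeRegressor(max_depth=5).fit(X_train[idx], y_train[idx])
    preds.append(model.predict(X_test))
preds = np.stack(preds)          # shape: (n_members, n_test)

mean_pred = preds.mean(axis=0)   # ensemble prediction
epistemic = preds.var(axis=0)    # member disagreement ~ epistemic uncertainty

# Disagreement tends to grow outside the training range, where the
# models have seen no data and epistemic uncertainty is highest.
print("inside training range:", epistemic[95:105].mean())
print("outside training range:", epistemic[:10].mean())
```

Bayesian approximation methods (e.g., Monte Carlo dropout) follow the same idea but obtain the set of predictions by sampling from an approximate posterior over model parameters rather than by training separate models.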
Original language | English |
---|---|
Pages (from-to) | 449-465 |
Number of pages | 17 |
Journal | Neurocomputing |
Volume | 489 |
Early online date | 24 Dec 2021 |
Publication status | Published - 7 Jun 2022 |
Externally published | Yes |
Bibliographical note
This work was supported in part by the National Natural Science Foundation of China (Grants 61976141, 62176160 and 61732011), in part by the National Key R&D Program of China (Grant 2021YFE0203700), in part by the Natural Science Foundation of Shenzhen (University Stability Support Program No. 20200804193857002), and in part by the Interdisciplinary Innovation Team of Shenzhen University.
Keywords
- Bayesian approximation
- Computer vision
- Ensemble learning
- Epistemic uncertainty learning
- Natural language processing
- Supervised learning