Abstract
The probabilistic classification vector machine (PCVM) is a sparse learning approach that aims to address the stability problems of the relevance vector machine (RVM) for classification. Because PCVM is trained with the expectation-maximization (EM) algorithm, it is sensitive to initialization, can converge to local minima, and produces only point estimates rather than a full Bayesian solution. It is also inefficient on large data sets. To address these problems, this paper proposes an efficient PCVM (EPCVM) that sequentially adds or deletes basis functions according to marginal likelihood maximization, yielding efficient training. Because EPCVM uses a truncated prior, two approximation techniques, Laplace approximation and expectation propagation (EP), are employed to obtain full Bayesian solutions; both are verified against a hybrid Monte Carlo approach. The generalization performance and computational effectiveness of EPCVM are extensively evaluated. Theoretical discussions using Rademacher complexity reveal the relationship between the sparsity and the generalization bound of EPCVM. © 2013 IEEE.
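The abstract names the Laplace approximation as one route to a full Bayesian solution. As an illustration only, and not the paper's implementation, the sketch below shows a Laplace approximation for a kernel-basis Bayesian classifier in Python: it substitutes a symmetric Gaussian prior and logistic link for EPCVM's truncated prior and probit link, and all names and hyperparameters (`laplace_fit`, `rbf_kernel`, `alpha`, `gamma`) are hypothetical.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian RBF basis functions; gamma is an assumed hyperparameter."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sigmoid(z):
    # Clipped for numerical stability on extreme inputs.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def laplace_fit(Phi, y, alpha=1.0, n_iter=50, tol=1e-6):
    """Laplace approximation for Bayesian logistic regression.

    Finds the MAP weights w* by Newton's method, then approximates the
    posterior by a Gaussian N(w*, H^{-1}), where H is the Hessian of the
    negative log-posterior at w*. Labels y must be in {0, 1}.
    """
    n, m = Phi.shape
    w = np.zeros(m)
    H = alpha * np.eye(m)
    for _ in range(n_iter):
        p = sigmoid(Phi @ w)
        grad = Phi.T @ (p - y) + alpha * w           # gradient of -log posterior
        R = p * (1.0 - p)                            # Bernoulli variances
        H = Phi.T @ (Phi * R[:, None]) + alpha * np.eye(m)
        step = np.linalg.solve(H, grad)              # Newton update
        w -= step
        if np.linalg.norm(step) < tol:
            break
    Sigma = np.linalg.inv(H)                         # Gaussian posterior covariance
    return w, Sigma

# Toy usage: two Gaussian blobs, basis functions centered on the training points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.r_[np.zeros(20), np.ones(20)]
Phi = rbf_kernel(X, X)
w_map, Sigma = laplace_fit(Phi, y, alpha=0.5)
print("training accuracy:", ((sigmoid(Phi @ w_map) > 0.5) == y).mean())
```

In the paper's method, a Gaussian posterior of this kind would further drive the sequential addition and deletion of basis functions via the marginal likelihood; that incremental selection step is not implemented in this sketch.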
| Original language | English |
| --- | --- |
| Article number | 6582514 |
| Pages (from-to) | 356-369 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 25 |
| Issue number | 2 |
| Early online date | 16 Aug 2013 |
| DOIs | |
| Publication status | Published - Feb 2014 |
| Externally published | Yes |
Keywords
- Bayesian classification
- efficient probabilistic classification model
- expectation propagation (EP)
- incremental learning
- Laplace approximation
- support vector machine (SVM)