Probabilistic classification vector machines

Huanhuan CHEN, Peter TIŇO, Xin YAO

Research output: Journal Publications › Journal Article (refereed) › peer-review

118 Citations (Scopus)


In this paper, a sparse learning algorithm, probabilistic classification vector machines (PCVMs), is proposed. We analyze relevance vector machines (RVMs) for classification problems and observe that adopting the same prior for different classes may lead to unstable solutions. To tackle this problem, a signed and truncated Gaussian prior is adopted over every weight in PCVMs, where the sign of the prior is determined by the class label, i.e., +1 or -1. The truncated Gaussian prior not only restricts the sign of each weight but also yields a sparse estimate of the weight vector, and thus controls the complexity of the model. In PCVMs, the kernel parameters can be optimized simultaneously within the training algorithm. The performance of PCVMs is extensively evaluated on four synthetic data sets and 13 benchmark data sets using three performance metrics: error rate (ERR), area under the receiver operating characteristic curve (AUC), and root mean squared error (RMSE). We compare PCVMs with soft-margin support vector machines (SVMSoft), hard-margin support vector machines (SVMHard), SVMs with the kernel parameters optimized by PCVMs (SVMPCVM), relevance vector machines (RVMs), and other baseline classifiers. Through five replications of a twofold cross-validation F test (the 5 × 2 cross-validation F test) over single data sets, and the Friedman test with the corresponding post-hoc test over multiple data sets, we find that PCVMs outperform the other algorithms, including SVMSoft, SVMHard, RVM, and SVMPCVM, on most of the data sets under all three metrics, especially under AUC. Our results also reveal that SVMPCVM performs slightly better than SVMSoft, implying that the parameter optimization algorithm in PCVMs outperforms cross-validation in terms of both performance and computational complexity. We also discuss the superiority of the PCVM formulation using maximum a posteriori (MAP) analysis and margin analysis, which explain the empirical success of PCVMs. © 2009 IEEE.
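As a reading aid (a paraphrased sketch, not quoted from the paper, whose exact notation may differ), the signed and truncated Gaussian prior described in the abstract can be written as follows: each weight w_i, associated with a basis function generated by a training point with class label y_i in {-1, +1}, receives a zero-mean Gaussian prior with inverse variance alpha_i, truncated to the half-line whose sign agrees with y_i:

    p(w_i \mid \alpha_i) =
    \begin{cases}
      2\,\mathcal{N}(w_i \mid 0,\ \alpha_i^{-1}) & \text{if } y_i\, w_i \ge 0, \\
      0 & \text{otherwise.}
    \end{cases}

The factor 2 renormalizes the density after truncation. Weights whose precisions alpha_i grow large are driven to zero and pruned, which is the source of the sparsity the abstract attributes to this prior.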
Original language: English
Pages (from-to): 901-914
Number of pages: 14
Journal: IEEE Transactions on Neural Networks
Issue number: 6
Early online date: 24 Apr 2009
Publication status: Published - Jun 2009
Externally published: Yes

Bibliographical note

Dr. Chen is the recipient of the Value in People (VIP) award from The Wellcome Trust (2009), the Dorothy Hodgkin Postgraduate Award (DHPA) from EPSRC (2004), and the Student Travel Grant for the 2006 Congress on Evolutionary Computation (CEC06). Huanhuan Chen (S'06–M'07) received the B.Sc. degree in electrical engineering from the University of Science and Technology of China, Hefei, China, in 2004, and the Ph.D. degree, sponsored by the Dorothy Hodgkin Postgraduate Award (DHPA), in computer science from the University of Birmingham, Birmingham, U.K., in 2008.


Keywords

  • Bayesian classification
  • Machine learning
  • Probabilistic classification model
  • Support vector machine

