Nonnegative matrix factorization (NMF) is a well-known paradigm for data representation. Traditional NMF-based classification methods first perform NMF or one of its variants on input data samples to obtain their low-dimensional representations, which are subsequently classified by means of a typical classifier (e.g., k-nearest neighbors (KNN) and support vector machine (SVM)). Such a stepwise manner may overlook the dependency between the two processes, compromising classification accuracy. In this paper, we unify the two processes by formulating a novel constrained optimization model, namely dual embedding regularized NMF (DENMF), which is semi-supervised. Our DENMF solution simultaneously finds the low-dimensional representations and the assignment matrix via joint optimization for better classification. Specifically, input data samples are projected onto two low-dimensional spaces (i.e., feature and label spaces), and locally linear embedding is employed to preserve the identical local geometric structure in the different spaces. Moreover, we propose an alternating iteration algorithm to solve the resulting DENMF, whose convergence is theoretically proven. Experimental results on five benchmark datasets demonstrate that DENMF achieves higher classification accuracy than state-of-the-art algorithms.
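As background for the abstract above, the following is a minimal sketch of plain (unconstrained, unsupervised) NMF via the classical multiplicative updates, which DENMF builds upon; it is not the DENMF algorithm itself, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-9, seed=0):
    """Sketch of plain NMF: factor a nonnegative matrix X (m x n) as
    X ~ W @ H with W (m x r), H (r x n) nonnegative, by Lee-Seung
    multiplicative updates minimizing the Frobenius reconstruction error.
    Not the DENMF model from the paper, which adds label-space embedding
    and locally linear embedding regularizers."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    # Random nonnegative initialization (eps keeps entries strictly positive).
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity automatically.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

In the stepwise pipeline the abstract criticizes, the columns of `H` (the low-dimensional representations) would then be fed to a separate classifier such as KNN or SVM; DENMF instead couples representation learning and label assignment in one joint optimization.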
Bibliographical note: This work was supported in part by the National Natural Science Foundation of China under Grant 61672443 and Grant 61772344, in part by the Hong Kong RGC General Research Funds under Grant 9042038 (CityU 11205314) and Grant 9042322 (CityU 11200116), and in part by the Hong Kong RGC Early Career Scheme Funds under Grant 9048123 (CityU 21211518).
- Nonnegative matrix factorization
- Semi-supervised learning