Abstract
The scalability problem in data mining involves the development of methods for handling large databases with limited computational resources such as memory and computation time. In this paper, two scalable clustering algorithms, bEMADS and gEMADS, are presented based on the Gaussian mixture model. Both summarize data into subclusters and then generate Gaussian mixtures from their data summaries. Their core algorithm, EMADS, is defined on data summaries and approximates the aggregate behavior of each subcluster of data under the Gaussian mixture model. EMADS is provably convergent. Experimental results substantiate that both algorithms can run several orders of magnitude faster than expectation-maximization with little loss of accuracy.
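The abstract describes running expectation-maximization on data summaries (subcluster statistics) rather than on individual records. As a rough, hedged illustration of that general idea only, and not the paper's actual EMADS update rules, the sketch below reduces each subcluster to sufficient statistics (count, sum, sum of outer products) and runs an EM-style loop over those summaries; the names `summarize` and `em_on_summaries`, the random pre-grouping, and all parameter choices are illustrative assumptions.

```python
# Hedged sketch: EM over data summaries, NOT the authors' exact EMADS algorithm.
# Each subcluster j is reduced to (n_j, sum_j, sq_j); the E-step evaluates
# responsibilities at subcluster means, the M-step reweights the statistics,
# and the raw second moments keep within-subcluster scatter in the covariances.
import numpy as np
from scipy.stats import multivariate_normal

def summarize(X, n_groups=50, seed=0):
    """Coarse pre-grouping (random here to keep the sketch short; a real
    pipeline would form subclusters with a BIRCH-like tree or k-means)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_groups, size=len(X))
    stats = []
    for g in range(n_groups):
        Xg = X[labels == g]
        if len(Xg):
            stats.append((len(Xg), Xg.sum(axis=0), Xg.T @ Xg))  # (n_j, sum_j, sq_j)
    return stats

def em_on_summaries(stats, k=3, n_iter=50, reg=1e-6, seed=0):
    counts = np.array([n for n, _, _ in stats], dtype=float)
    sums = np.stack([s for _, s, _ in stats])        # (J, d)
    sqs = np.stack([q for _, _, q in stats])         # (J, d, d)
    centers = sums / counts[:, None]                 # subcluster means
    J, d = centers.shape
    rng = np.random.default_rng(seed)
    means = centers[rng.choice(J, k, replace=False)]
    covs = np.stack([np.eye(d)] * k)
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: component densities evaluated at the subcluster means
        dens = np.stack([weights[c] * multivariate_normal(means[c], covs[c]).pdf(centers)
                         for c in range(k)], axis=1) + 1e-300
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update parameters from the weighted sufficient statistics
        eff = resp * counts[:, None]                 # effective counts, shape (J, k)
        Nk = eff.sum(axis=0)
        weights = Nk / counts.sum()
        means = (eff.T @ centers) / Nk[:, None]
        for c in range(k):
            second = np.tensordot(eff[:, c], sqs, axes=1)   # sum_j eff_jc * sq_j
            covs[c] = second / Nk[c] - np.outer(means[c], means[c]) + reg * np.eye(d)
    return weights, means, covs

# Example usage on synthetic data with three well-separated clusters
X = np.vstack([np.random.default_rng(1).normal(loc, 1.0, size=(500, 2))
               for loc in ([0, 0], [6, 0], [0, 6])])
w, mu, cov = em_on_summaries(summarize(X), k=3)
```

Because each EM iteration touches only the summaries rather than every record, the cost per iteration scales with the number of subclusters, which is the general mechanism behind the speedups the abstract reports; the exact approximation and convergence guarantees are specific to EMADS as defined in the paper.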
Original language | English |
---|---|
Pages (from-to) | 1710-1719 |
Number of pages | 10 |
Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
Volume | 27 |
Issue number | 11 |
DOIs | |
Publication status | Published - Nov 2005 |
Bibliographical note
This work was submitted when Dr. Jin was with Lingnan University, Hong Kong. It appeared previously in the Proceedings of the Third IEEE Int'l Conf. on Data Mining '03 [5]. The authors would like to thank the editor and the reviewers for their constructive comments and suggestions, and thank T. Zhang, R. Ramakrishnan, M. Livny, and V. Ganti for their BIRCH code.
Funding
The work was partially supported by RGC Grants CUHK 4212/01E and LU 3009/02E of Hong Kong.
Keywords
- Scalable clustering
- Gaussian mixture model
- expectation-maximization
- data summary
- maximum penalized likelihood estimate