The Hierarchical Dirichlet Process (HDP) has attracted much attention in the natural language processing research community. Given a corpus, HDP can determine the number of topics automatically; this nonparametric property overcomes the challenge of manually specifying a suitable topic number in parametric topic models such as Latent Dirichlet Allocation (LDA). Nevertheless, HDP incurs a much higher computational cost than LDA for parameter estimation. A parallel Gibbs sampling algorithm that exploits multi-threading has been proposed to estimate HDP parameters, based on the equivalence between HDP and the Gamma-Gamma Poisson Process (G2PP) in terms of the generative process. Unfortunately, this parallel Gibbs sampling algorithm requires a finite approximation of the number of topics (i.e., the topic number must be predefined) and thus cannot retain the nonparametric feature of HDP. Another drawback of these models is that they fail to capture semantic dependencies between words, because topic assignments are made independently for each word. Although some work has been done on phrase-based topic modelling, existing methods are still limited: they either force an entire phrase to share a common topic or rely on complex and time-consuming phrase mining. In this paper, we develop a copula-guided parallel Gibbs sampling algorithm for HDP that can adjust the number of topics dynamically and capture the latent semantic dependencies between words that compose a coherent segment. Extensive experiments on real-world datasets indicate that our method achieves low perplexities and high topic coherence scores at a small time cost. In addition, we validate the effectiveness of our method in modelling word semantic dependencies by comparing the extracted topical phrases with those learned by state-of-the-art phrase-based baselines.
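The nonparametric behaviour the abstract refers to can be illustrated with the Chinese restaurant process (CRP) that underlies the Dirichlet process: each new observation either joins an existing cluster (table) with probability proportional to its size, or opens a new one with probability proportional to a concentration parameter, so the number of clusters grows with the data rather than being fixed in advance. The sketch below is a minimal illustration of this idea, not the paper's algorithm; the function name and parameters are chosen for this example.

```python
import random

def crp_partition(n_customers, alpha, seed=0):
    """Simulate a Chinese restaurant process: each customer joins an
    existing table with probability proportional to its occupancy, or
    opens a new table with probability proportional to alpha.
    Returns the list of table sizes (the analogue of topic counts)."""
    rng = random.Random(seed)
    tables = []  # occupancy count per table
    for i in range(n_customers):
        # Total unnormalised weight: i customers already seated, plus alpha.
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for t, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[t] += 1  # join an existing table
                break
        else:
            tables.append(1)  # open a new table, i.e. create a new topic

    return tables

sizes = crp_partition(1000, alpha=2.0)
print(f"{len(sizes)} topics emerged for 1000 tokens")
```

Note that `len(sizes)` is not specified anywhere: the partition structure, and hence the number of topics, is inferred from the data, which is what the finite-approximation parallel sampler gives up by predefining the topic number.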
|Number of pages||16|
|Journal||IEEE Transactions on Knowledge and Data Engineering|
|Early online date||28 Feb 2020|
|Publication status||Published - 28 Feb 2020|
Bibliographical note: The authors are thankful to the reviewers for their constructive comments and suggestions. This work was supported in part by the National Natural Science Foundation of China under Grant 61972426, in part by the Interdisciplinary Research Scheme of the Dean’s Research Fund 2018-19 under Grant FLASS/DRF/IDS-3, in part by the Departmental Collaborative Research Fund 2019 of The Education University of Hong Kong under Grant MIT/DCRF-R2/18-19, in part by the HKIBS Research Seed Fund 2019/20 of Lingnan University, Hong Kong under Grant 190-009, in part by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China under Grant UGC/FDS16/E01/19, in part by the RGC of the Hong Kong SAR under Grants CityU 11507219 and CityU 11525716, in part by the NSFC Basic Research Program under Grant 71671155 and the CityU Shenzhen Research Institute. The work of J. Yin was supported by the National Natural Science Foundation of China under Grants U1711262, U1611264, U1711261, U1811261, U1811264, and U1911203. The work of Q. Li was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Collaborative Research Fund) under Grant C1031-18G, and an internal research grant from the Hong Kong Polytechnic University under Grant 1.9B0V.
- Topic modelling
- Parallel Gibbs sampling