TACN: A Topical Adversarial Capsule Network for textual network embedding

Xiaorui QIN, Yanghui RAO*, Haoran XIE, Jiahai WANG, Fu Lee WANG

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

1 Citation (Scopus)


Effectively combining the topological and attribute information of nodes is a valuable task in network embedding. Nevertheless, many prior network embedding methods treated node attributes as simple attribute sets or ignored them entirely. In some scenarios, the hidden information contained in vertex attributes is essential to network embedding. For instance, networks whose vertices carry text play an increasingly important role in our lives, including citation networks, social networks, and entry networks. In these textual networks, the latent topic relevance among vertices, embedded in their textual attributes, is valuable in network analysis. Shared latent topics may influence how nodes interact, which is critical to network embedding. However, much prior work on textual network embedding regarded the text as simple word sets while ignoring the embedded topic information. In this paper, we develop a model named Topical Adversarial Capsule Network (TACN) for textual network embedding, which extracts a low-dimensional latent space of the original network from node structures, vertex attributes, and the topic information contained in node text. The proposed TACN contains three parts. The first part is an embedding model, which extracts the embedding representation from the topological structure, vertex attributes, and document-topic distributions. To ensure a consistent training process by back-propagation, we generate document-topic distributions with a neural topic model based on the Gaussian Softmax construction. The second part is a prediction model, which exploits the labels of vertices. The third part is an adversarial capsule model, which helps distinguish whether a latent representation comes from the node structure domain, the vertex attribute domain, or the document-topic distribution domain. The latent representations, which may come from any of the three domains, are the output of the embedding model. We incorporate the adversarial idea into the adversarial capsule model to combine the information from these three domains, rather than to distinguish the representations conventionally. Experiments on seven real-world datasets validate the effectiveness of our method.
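The abstract notes that document-topic distributions are produced by a neural topic model with the Gaussian Softmax construction so that the whole pipeline remains trainable by back-propagation. The following is a minimal NumPy sketch of that construction only, not the paper's implementation: all layer sizes, weight names, and the toy data are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gaussian_softmax_theta(bow, W_enc, W_mu, W_logvar, W_topic, rng):
    """Bag-of-words -> document-topic distribution theta via the
    Gaussian Softmax construction: encode the document, sample a
    Gaussian latent with the reparameterization trick (which keeps
    the sampling step differentiable for end-to-end back-propagation),
    then apply softmax to a linear map of the sample."""
    h = np.maximum(bow @ W_enc, 0.0)                   # ReLU encoder layer
    mu = h @ W_mu                                      # Gaussian mean
    logvar = h @ W_logvar                              # Gaussian log-variance
    z = mu + rng.standard_normal(mu.shape) * np.exp(0.5 * logvar)
    return softmax(z @ W_topic)                        # theta sums to 1 per doc

# Toy usage with random weights: 5 documents, 30-word vocab, 4 topics
# (all hypothetical sizes).
rng = np.random.default_rng(0)
V, H, K = 30, 16, 4
bow = rng.integers(0, 3, size=(5, V)).astype(float)
theta = gaussian_softmax_theta(
    bow,
    rng.standard_normal((V, H)) * 0.1,
    rng.standard_normal((H, K)) * 0.1,
    rng.standard_normal((H, K)) * 0.1,
    rng.standard_normal((K, K)),
    rng,
)
```

Each row of `theta` is a valid document-topic distribution (non-negative, summing to one), which is what the embedding model consumes alongside structure and attribute inputs.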
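The adversarial part of the model works like a standard domain-adversarial objective: a discriminator tries to tell which of the three domains (structure, attribute, document-topic) a latent representation came from, while the embedding model is trained against it so the three domains become indistinguishable and their information is fused. A hedged sketch of that min-max loss, with a linear 3-way discriminator and made-up shapes (not the paper's capsule architecture), might look like:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def domain_losses(latents, domains, W_disc):
    """Cross-entropy of a linear 3-way domain discriminator.
    The discriminator minimizes this loss (learning to tell the
    domains apart); the embedding model maximizes it, pushing
    representations from the three domains to be indistinguishable
    so the information is combined rather than separated."""
    probs = softmax(latents @ W_disc)
    ce = -np.mean(np.log(probs[np.arange(len(domains)), domains] + 1e-12))
    return ce, -ce   # (discriminator loss, embedder adversarial loss)

# Toy batch: 3 latent vectors per domain, dimension 8 (hypothetical).
rng = np.random.default_rng(1)
latents = rng.standard_normal((9, 8))
domains = np.array([0, 1, 2] * 3)   # 0=structure, 1=attribute, 2=topic
W_disc = rng.standard_normal((8, 3)) * 0.1
d_loss, e_loss = domain_losses(latents, domains, W_disc)
```

In training, the two losses would be optimized alternately (or via gradient reversal); the sketch only shows how the single cross-entropy term serves both players with opposite signs.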
Original language: English
Pages (from-to): 766-777
Number of pages: 12
Journal: Neural Networks
Early online date: 6 Oct 2021
Publication status: Published - Dec 2021

Bibliographical note

Funding Information:
The authors are thankful to the reviewers for their constructive comments and suggestions. The work was supported by the National Natural Science Foundation of China (61972426, 62072483), Guangdong Basic and Applied Basic Research Foundation (2020A1515010536), the Faculty Research Grants (DB21B6 and DB21A9) of Lingnan University, Hong Kong, and a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (UGC/FDS16/E01/19).

Publisher Copyright:
© 2021 Elsevier Ltd


Keywords

  • Capsule Network
  • Document-topic distribution
  • Generative Adversarial Network
  • Textual network embedding


