Abstract
Graph neural networks have recently received increasing attention. These methods map nodes into a latent space and learn vector representations of the nodes for a variety of downstream tasks. To gain trust and to promote collaboration between AIs and humans, it would be better if those representations were interpretable to humans. However, most explainable AI methods focus on a supervised learning setting and aim to answer the following question: 'Why does the model predict y for an input x?'. For an unsupervised learning setting such as node embedding, interpretation is more complicated, since the embedding vectors are usually not directly understandable to humans. On the other hand, nodes and edges in a graph are often associated with texts in many real-world applications. A question naturally arises: could we integrate human-understandable textual data into graph learning to facilitate interpretable node embedding? In this paper we present interpretable graph neural networks (iGNN), a model that learns textual explanations for node representations by modeling the extra information contained in the associated textual data. To validate the proposed method, we investigate the interpretability of the learned embedding vectors and measure it with functional interpretability. Experimental results on multiple text-labeled graphs show the effectiveness of the iGNN model in learning textual explanations of node embeddings while performing well in downstream tasks. © 2021 IEEE.
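To make the general idea concrete, the sketch below illustrates one simple way a text-attributed graph can yield human-readable node embeddings: each node's text is turned into a bag-of-words vector and smoothed over the graph with a single GCN-style propagation step, so every embedding dimension corresponds to a vocabulary word. This is only a minimal, assumed illustration of text-attributed node embedding, not the iGNN architecture described in the paper; the helper names (`build_features`, `propagate`) are hypothetical.

```python
# Minimal sketch of text-attributed node embedding (illustrative only,
# not the authors' iGNN method).
import numpy as np

def build_features(node_texts, vocab):
    """Bag-of-words features: one row per node, one column per vocabulary word."""
    X = np.zeros((len(node_texts), len(vocab)))
    index = {w: i for i, w in enumerate(vocab)}
    for n, text in enumerate(node_texts):
        for word in text.lower().split():
            if word in index:
                X[n, index[word]] += 1.0
    return X

def propagate(adj, X):
    """One GCN-style step: symmetrically normalized adjacency times features."""
    A = adj + np.eye(adj.shape[0])          # add self-loops
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt @ X  # smoothed, text-grounded embeddings

# Toy text-labeled graph: three nodes with short texts and a path 0-1-2.
texts = ["graph neural networks", "node embedding methods", "text mining"]
vocab = sorted({w for t in texts for w in t.split()})
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)

Z = propagate(adj, build_features(texts, vocab))
# Each dimension of Z corresponds to a vocabulary word, so a node's embedding
# can be read off in human terms (which words dominate that node).
print(np.round(Z, 2))
```

Because every embedding dimension is tied to a word, the toy representation is interpretable by construction; the paper's contribution lies in learning such textual explanations jointly with node representations that remain useful for downstream tasks.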
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the International Joint Conference on Neural Networks |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Volume | 2021-July |
| ISBN (Print) | 9780738133669 |
| DOIs | |
| Publication status | Published - 18 Jul 2021 |
| Externally published | Yes |
Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 766186.
Keywords
- Interpretability
- Node embedding
- Text mining