Abstract
Graph Neural Networks (GNNs) have made significant strides in the analysis and modeling of complex network data, particularly excelling in graph and node classification tasks. However, the "black-box" nature of GNNs impedes user understanding and trust, thereby restricting their broader application. This challenge has spurred a growing focus on demystifying GNNs to make their decision-making processes more transparent. Traditional methods for explaining GNNs often rely on selecting subgraphs and employing combinatorial optimization to generate understandable outputs. However, these methods are closely tied to the inherent complexity of GNNs, leading to high explanation costs. To address this issue, we introduce a lower-complexity proxy model to explain GNNs. Our approach leverages knowledge distillation with inter-layer alignment, specifically targeting the challenge of over-smoothing and its detrimental impact on model explanation. First, we distill critical insights from the complex GNN model into a more manageable proxy model. We then apply an inter-layer alignment-based distillation technique to ensure alignment between the proxy and the original model, facilitating the extraction of node- or edge-level explanations within the proxy framework. We theoretically prove that the explanations derived from the proxy model are faithful to both the proxy and the original model. Additionally, we show that the upper bound of unfaithfulness between the proxy and the original model remains consistent when the distillation error is infinitesimal. This inter-layer alignment knowledge distillation technique enables the proxy model to retain, to the greatest extent, the knowledge learning and topological representation capabilities of the original model. Experimental evaluations on numerous real-world datasets confirm the effectiveness of our method, demonstrating robust performance.
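The combined objective described in the abstract (distilling the teacher's predictions while aligning intermediate layers between the proxy and the original model) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names, the one-to-one layer pairing, and the weighting scheme are all assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits,
                 teacher_layers, student_layers, alpha=0.5):
    """Hypothetical combined distillation objective:
    - a KL term matching the teacher's soft class predictions, and
    - an inter-layer alignment term (MSE) between paired hidden
      representations of the teacher (original GNN) and the student (proxy).
    """
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    # KL(teacher || student), averaged over nodes; epsilon guards log(0).
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                axis=-1).mean()
    # Mean squared error between each aligned teacher/student layer pair.
    align = np.mean([np.mean((t - s) ** 2)
                     for t, s in zip(teacher_layers, student_layers)])
    return alpha * kl + (1 - alpha) * align
```

In this sketch a small distillation error (both terms near zero) corresponds to the proxy closely tracking the original model, which is the regime in which the abstract's faithfulness bound is stated.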
| Original language | English |
|---|---|
| Number of pages | 17 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Publication status | E-pub ahead of print - 17 Dec 2025 |
Bibliographical note
Publisher Copyright: © 1979-2012 IEEE.
Funding
This work is supported by the National Natural Science Foundation of China (62221005, 62136002, 12201089), the Innovation Projects for Studying Abroad and Returning to China (cx2023097).
Keywords
- Graph Neural Network
- Explanation
- Knowledge Distillation
- Inter-Layer Alignment