Mutual GNN-MLP distillation for robust graph adversarial defense.
Journal:
Neural Networks: The Official Journal of the International Neural Network Society
Published Date:
May 16, 2025
Abstract
Current adversarial defenses for graph neural networks (GNNs) face critical limitations that hinder their real-world application: (1) inadequate adaptability to graph heterophily, (2) lack of generalizability to early GNNs such as Graph SAmple and aggreGatE (GraphSAGE), and (3) low inference scalability, which is problematic in resource-constrained scenarios. To tackle these issues, we propose the Mutual GNN-MLP distillation (MGMD) framework. MGMD leverages the complementary strengths of GNNs and multi-layer perceptrons (MLPs), thereby enhancing adaptability to graph heterophily and fortifying defenses against structure and/or node-feature attacks. Because MGMD does not intrude on the internals of either the GNN or the MLP, it integrates seamlessly with the simple early GNNs widely used downstream, and the distilled MLP enables extremely high inference scalability. Our decision-boundary analysis formally demonstrates MGMD's adversarial robustness and its adaptability to graph heterophily. To mitigate potential convergence issues stemming from the conflicting inductive biases of the heterogeneous MLP and GNN, we introduce a novel learning rate scheduler inspired by our convergence analysis of the involved MLP. Experiments on seven homophilic and three heterophilic graphs demonstrate the effectiveness of the proposed scheduler and the advantages of MGMD over prior methods.
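The abstract gives no implementation details, but the general idea it describes can be illustrated. The following is a minimal PyTorch sketch of a mutual-distillation loop in which a GNN and an MLP are trained jointly, each fitting the labels while matching the other's softened predictions; at inference only the MLP is queried, which is the source of the scalability the abstract mentions. All names here (SimpleGNN, kd_loss, the temperature T, and the weight lam) are illustrative assumptions; MGMD's actual loss terms and its proposed learning rate scheduler are not specified in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGNN(nn.Module):
    """Two-layer graph convolution using a dense normalized adjacency a_hat."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.lin1(x))  # neighborhood aggregation
        return a_hat @ self.lin2(h)

class MLP(nn.Module):
    """Graph-free peer: operates on node features only."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, out_dim))

    def forward(self, x):
        return self.net(x)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label KL term; the teacher side is detached so each model
    matches, but never back-propagates into, its peer."""
    p_t = F.softmax(teacher_logits.detach() / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def mutual_step(gnn, mlp, opt_gnn, opt_mlp, x, a_hat, y, train_mask, lam=0.5):
    """One round of mutual distillation: each model fits the labels and
    matches the other's softened predictions (lam is an assumed weight)."""
    logits_g, logits_m = gnn(x, a_hat), mlp(x)
    loss_g = (F.cross_entropy(logits_g[train_mask], y[train_mask])
              + lam * kd_loss(logits_g, logits_m))
    loss_m = (F.cross_entropy(logits_m[train_mask], y[train_mask])
              + lam * kd_loss(logits_m, logits_g))
    opt_gnn.zero_grad(); loss_g.backward(); opt_gnn.step()
    opt_mlp.zero_grad(); loss_m.backward(); opt_mlp.step()
    # At inference, only the distilled MLP is needed: mlp(x).argmax(-1)
    return loss_g.item(), loss_m.item()
```

Because the MLP never touches the adjacency matrix at inference time, per-node prediction cost is independent of graph size and of structure perturbations, consistent with the robustness and scalability claims in the abstract.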