Alleviating the over-smoothing of graph neural computing by a data augmentation strategy with entropy preservation

Authors:

Highlights:

• We provide a new graph entropy design to measure the smoothness of the graph feature manifold and conclude that motif-based information structures determine this graph entropy.

• We propose a novel graph data augmentation strategy that preserves not only the integrity of the topological structure but also that of the motif-based information structures. Our strategy retains more graph entropy than other methods.

• We evaluate our model on several datasets; experiments show it outperforms competing methods on real-world node classification tasks.

• Our approach significantly enhances the robustness of GCNs and alleviates the over-smoothing phenomenon to a certain extent.
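The over-smoothing phenomenon referenced above can be seen in a toy experiment (not the paper's method): repeatedly applying the symmetrically normalized propagation used by GCNs, without weights or nonlinearities, drives node features toward indistinguishable values. The graph and features below are hypothetical.

```python
# Minimal sketch of GCN over-smoothing on a toy 4-node path graph 0-1-2-3.
# Repeated neighborhood averaging shrinks the variance of node features.
import numpy as np

# Hypothetical adjacency matrix of an undirected path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetric normalization with self-loops: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(4)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))

# One-dimensional node features; variance acts as a crude smoothness measure.
X = np.array([[1.0], [0.0], [0.0], [1.0]])

before = float(X.var())
for _ in range(20):        # 20 propagation steps, no learnable weights
    X = A_hat @ X
after = float(X.var())

print(before, after)       # variance shrinks sharply after propagation
```

Because the spectral radius of `A_hat` is 1 with all other eigenvalues strictly inside the unit interval, every component of `X` except the dominant one decays, which is exactly the smoothing that the proposed entropy-preserving augmentation is meant to counteract.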


Keywords: Graph representation, Graph convolutional networks, Information theory, Graph entropy

Article history: Received 14 July 2021, Revised 9 June 2022, Accepted 31 July 2022, Available online 1 August 2022, Version of Record 5 August 2022.

DOI: https://doi.org/10.1016/j.patcog.2022.108951