Citation: HOU Jingyi, TANG Yuxin, YU Xinbo, LIU Zhijie. Inferring local topology via variational convolution for graph representation[J]. Chinese Journal of Engineering, 2023, 45(10): 1750-1758. DOI: 10.13374/j.issn2095-9389.2022.07.24.005

Inferring local topology via variational convolution for graph representation


Abstract: The development of deep learning techniques and the support of big-data computing power have revolutionized graph representation research by making it feasible to train graph neural networks of various structures. Existing methods, such as graph attention networks, mainly focus on global information propagation in graph neural networks and have theoretically proven strong representation capability. However, these general methods lack flexible representation mechanisms when facing graph data whose local topology carries specific semantics, such as functional groups in chemical reactions, which often determine molecular properties and participate in the reaction process. Accordingly, further exploiting local structure representations is of great importance for graph-based tasks. Several existing methods either use domain expert knowledge or conduct subgraph isomorphism counting to learn local topology representations of graphs; however, there is no guarantee that these methods generalize to different domains without specific knowledge or complex substructure preprocessing. In this study, we propose a simple and automatic local topology inference method that uses variational convolutions to improve the local representation ability of graph attention networks. The proposed method not only considers relationship reasoning and message passing on the global graph structure but also adaptively learns the graph's local structure representations under the guidance of readily accessible statistical priors. More specifically, variational inference is used to adaptively learn the convolutional template size; the inference is conducted layer by layer under the guidance of the statistical priors, so that the template size adapts, in a self-supervised way, to multiple subgraphs with different structures. The variational convolution module is easily pluggable and can be concatenated with arbitrary hidden layers of any graph neural network. Moreover, owing to the locality of the convolution operations, the relations between graph nodes can be further sparsified to alleviate the over-squashing problem in the global information propagation of the graph neural network. As a result, the proposed method significantly improves the overall representation ability of the graph attention network through variational inference of the convolutional operations for local topology representation. Experiments are conducted on three large-scale, publicly available datasets: OGBG-MolHIV, USPTO, and Buchwald-Hartwig. The experimental results show that exploiting various kinds of local topological information improves the performance of the graph attention network.
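The abstract gives no implementation details, so the following PyTorch sketch is only an illustration of the mechanism it describes: a pluggable block that infers a discrete convolutional template size via variational inference (approximated here with a Gumbel-softmax relaxation) and regularizes the inferred posterior toward a statistical prior with a KL term. All names (VariationalLocalConv, candidate_sizes, prior) and design choices (mean-pooled node features for the posterior, 1-D convolutions over a node ordering) are assumptions for illustration, not the authors' code.

    # Hypothetical sketch of a variational convolution block that infers a
    # local template (kernel) size and can be plugged between the hidden
    # layers of a graph neural network. Illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VariationalLocalConv(nn.Module):
        def __init__(self, dim, candidate_sizes=(3, 5, 7), prior=None):
            super().__init__()
            # One 1-D convolution per candidate template (kernel) size.
            self.convs = nn.ModuleList(
                nn.Conv1d(dim, dim, k, padding=k // 2) for k in candidate_sizes
            )
            # Posterior logits over template sizes, inferred from node features.
            self.size_logits = nn.Linear(dim, len(candidate_sizes))
            # Statistical prior over sizes (e.g., substructure-size statistics);
            # uniform when none is supplied.
            if prior is None:
                prior = torch.full((len(candidate_sizes),), 1.0 / len(candidate_sizes))
            self.register_buffer("prior", prior)

        def forward(self, x):
            # x: (batch, num_nodes, dim); nodes are assumed to be ordered so
            # that sequence neighborhoods approximate local graph neighborhoods.
            logits = self.size_logits(x.mean(dim=1))                # (B, K)
            probs = F.softmax(logits, dim=-1)
            # Differentiable sample of a template size (Gumbel-softmax).
            sample = F.gumbel_softmax(logits, tau=1.0, hard=False)  # (B, K)
            h = x.transpose(1, 2)                                   # (B, dim, N)
            outs = torch.stack([conv(h) for conv in self.convs], dim=-1)
            out = (outs * sample.view(sample.size(0), 1, 1, -1)).sum(dim=-1)
            # KL(posterior || prior): self-supervised guidance toward the prior.
            kl = (probs * (probs.clamp_min(1e-9) / self.prior).log()).sum(-1).mean()
            return out.transpose(1, 2), kl

In training, the KL terms from all such layers would be added to the task loss, which matches the abstract's layer-by-layer inference guided by statistical priors; a hard (straight-through) Gumbel-softmax sample would commit to a single template size at inference time.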

     
