Digraph Contrastive Learning

Figure: Illustration of the DiGCL model.

Abstract

Graph Contrastive Learning (GCL) has emerged as a way to learn generalizable representations from contrastive views. However, it is still in its infancy, with two outstanding concerns: 1) generating contrastive views by altering the graph structure through data augmentation may mislead the message passing scheme, since such alterations discard intrinsic structural information, especially the directional structure of digraphs; 2) because GCL typically generates only a limited number of contrastive views, it does not take full advantage of the contrastive information that data augmentation can provide, leaving models to learn from incomplete structural information. In this paper, we design a digraph data augmentation method called Laplacian perturbation and theoretically analyze how it provides contrastive information without changing the digraph structure. We further present a multi-view digraph contrastive learning framework that learns from all possible contrastive views generated by Laplacian perturbation, and train it with multi-task curriculum learning to progressively learn from multiple easy-to-hard contrastive views. We empirically show that our model retains more structural features of digraphs than other GCL models owing to its ability to provide complete contrastive information. Experiments on various benchmarks show that our approach consistently outperforms state-of-the-art methods.
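As a rough illustration of the idea (not the paper's exact implementation), the sketch below builds a symmetric digraph Laplacian from a PageRank-style transition matrix and perturbs only the teleport probability to obtain two contrastive views; the edge set of the digraph is never modified. The helper name `digraph_laplacian`, the power-iteration routine, and the specific `alpha` values are illustrative assumptions.

```python
import numpy as np

def digraph_laplacian(A, alpha):
    """Hypothetical sketch: symmetric digraph Laplacian derived from a
    PageRank-style transition matrix. Perturbing the teleport
    probability `alpha` yields different Laplacians (contrastive views)
    while the underlying directed edge set stays untouched."""
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                     # guard against isolated nodes
    P = A / deg                             # row-stochastic transition matrix
    P_pr = (1 - alpha) * P + alpha / n      # teleport makes it irreducible
    # Stationary distribution pi via power iteration (pi @ P_pr = pi).
    pi = np.full(n, 1.0 / n)
    for _ in range(100):
        pi = pi @ P_pr
    Pi_sqrt = np.diag(np.sqrt(pi))
    Pi_inv_sqrt = np.diag(1.0 / np.sqrt(pi))
    # Symmetrize with the stationary distribution, then form L = I - S.
    S = (Pi_sqrt @ P_pr @ Pi_inv_sqrt + Pi_inv_sqrt @ P_pr.T @ Pi_sqrt) / 2
    return np.eye(n) - S

# Two contrastive views of the same 3-node directed cycle:
# only the teleport probability differs between them.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
view_1 = digraph_laplacian(A, alpha=0.1)
view_2 = digraph_laplacian(A, alpha=0.2)
```

In this sketch, sweeping `alpha` over a range would generate the family of views a multi-view framework could contrast, ordered from small perturbations (easy) to large ones (hard) in the spirit of the curriculum described above.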

Publication
Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021)