How Does Bayesian Noisy Self-Supervision Defend Graph Convolutional Networks?

Jun Zhuang, Mohammad Al Hasan

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

In recent years, graph convolutional networks (GCNs) have been shown to outperform other contemporary machine learning models on the node classification task. However, two issues threaten the robustness of GCNs: label scarcity and adversarial attacks. Intensive studies aim to strengthen the robustness of GCNs from three perspectives: self-supervision-based methods, adversarial-based methods, and detection-based methods. Yet none of these methods can handle both issues simultaneously. In this paper, we frame noisy supervision as a form of self-supervised learning and propose a novel Bayesian graph noisy self-supervision model, GraphNS, to address both issues. Extensive experiments demonstrate that GraphNS significantly enhances node classification against both label scarcity and adversarial attacks. This enhancement generalizes over four classic GCNs and is superior to competing methods across six public graph datasets.

Original language: English
Pages (from-to): 2997-3018
Number of pages: 22
Journal: Neural Processing Letters
Volume: 54
Issue number: 4
DOIs
State: Published - Aug 2022

Keywords

  • Bayesian inference
  • Defense of graph convolutional networks
  • Node classification
  • Noisy Supervision
