Transferability of graph neural networks: An extended graphon approach

Bibliographic Details
Published in: Applied and Computational Harmonic Analysis, Vol. 63, pp. 48-83
Main authors: Maskey, Sohir; Levie, Ron; Kutyniok, Gitta
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.03.2023
Keywords:
ISSN: 1063-5203, 1096-603X
Online access: Full text
Description
Abstract: We study spectral graph convolutional neural networks (GCNNs), where filters are defined as continuous functions of the graph shift operator (GSO) through functional calculus. A spectral GCNN is not tailored to one specific graph and can be transferred between different graphs. It is hence important to study GCNN transferability: the capacity of the network to produce approximately the same effect on different graphs that represent the same phenomenon. Transferability ensures that GCNNs trained on certain graphs generalize if the graphs in the test set represent the same phenomena as the graphs in the training set. In this paper, we consider a model of transferability based on graphon analysis. Graphons are limit objects of graphs, and, in the graphon paradigm, two graphs represent the same phenomenon if both approximate the same graphon. Our main contributions can be summarized as follows: 1) we prove that any fixed GCNN with continuous filters is transferable between graphs that approximate the same graphon, 2) we prove transferability for graphs that approximate unbounded graphon shift operators, which are defined in this paper, and 3) we obtain non-asymptotic approximation results, proving linear stability of GCNNs. This extends current state-of-the-art results, which show asymptotic transferability for polynomial filters on graphs that approximate bounded graphons.

Highlights:
• We study the generalization error of graph convolutional neural networks.
• Graphs representing the same phenomenon are modeled via graphon analysis.
• We show that networks can be transferred between graphs sampling the same phenomenon.
• Our analysis allows working with generic continuous filters.
• By introducing unbounded graphons, Euclidean CNNs become a special case of our analysis.

A minimal illustrative sketch of the spectral filtering setup described above is appended after this record.
DOI: 10.1016/j.acha.2022.11.008
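
The abstract above defines spectral filters through functional calculus, i.e., a continuous function g applied to the graph shift operator, and frames transferability via graphs that approximate a common graphon. Below is a minimal, self-contained sketch (not the authors' code) of that setup: a bounded graphon W, graphs of increasing size induced from it, and the same continuous filter g applied to each GSO via eigendecomposition. The graphon W, the filter g, and the input signal f are illustrative assumptions, not taken from the paper.

```python
# Minimal illustrative sketch (assumed setup, not the paper's code):
# the same continuous spectral filter, applied through functional calculus
# to the graph shift operators of graphs induced by one graphon, yields
# outputs that stabilize as the graphs grow -- the transferability idea.

import numpy as np

def graphon(x, y):
    # Example bounded graphon W(x, y) on [0, 1]^2 (illustrative assumption).
    return np.exp(-3.0 * np.abs(x - y))

def induced_gso(n):
    # Weighted graph on a regular grid of n points; dividing by n makes the
    # adjacency matrix a discretization of the graphon's integral operator.
    t = (np.arange(n) + 0.5) / n
    A = graphon(t[:, None], t[None, :])
    np.fill_diagonal(A, 0.0)
    return A / n

def apply_filter(A, g, x):
    # Functional calculus: g(A) x = U g(Lambda) U^T x, for any continuous g.
    lam, U = np.linalg.eigh(A)
    return U @ (g(lam) * (U.T @ x))

g = lambda lam: np.tanh(5.0 * lam)      # a continuous, non-polynomial filter
f = lambda t: np.sin(2.0 * np.pi * t)   # a continuous input signal on [0, 1]

for n in (50, 200, 800):
    t = (np.arange(n) + 0.5) / n
    y = apply_filter(induced_gso(n), g, f(t))
    # Discretized L2 norm of the filter output; the printed values stabilize
    # as n grows, since all graphs approximate the same graphon shift operator.
    print(n, np.sqrt(np.mean(y ** 2)))
```

The printed norms are only a crude proxy for closeness of the outputs; the paper's results instead bound the difference between the graph filter outputs and the corresponding graphon filter output, non-asymptotically and for generic continuous filters, which is what the sketch is meant to make concrete.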