Knowledge Vault 6/54 - ICML 2020
Benchmarking Graph Neural Networks
Xavier Bresson

Concept Graph & Resume using Claude 3.5 Sonnet | ChatGPT-4o | Llama 3:

graph LR
  classDef main fill:#f9d4f9, font-weight:bold, font-size:14px
  classDef basics fill:#f9d4d4, font-weight:bold, font-size:14px
  classDef architecture fill:#d4f9d4, font-weight:bold, font-size:14px
  classDef expressivity fill:#d4d4f9, font-weight:bold, font-size:14px
  classDef benchmarking fill:#f9f9d4, font-weight:bold, font-size:14px
  classDef future fill:#d4f9f9, font-weight:bold, font-size:14px
  Main[Benchmarking Graph Neural Networks] --> A[GNN Basics]
  Main --> B[GNN Architectures]
  Main --> C[Expressivity and WL Tests]
  Main --> D[Benchmarking and Performance]
  Main --> E[Future Directions]
  A --> A1[GNNs analyze graphs in various applications 1]
  A --> A2[Benchmarking crucial for GNN development 2]
  A --> A3[MPGCNs: permutation-invariant, size-independent, locality-preserving networks 3]
  A --> A4[Isotropic vs anisotropic GCNs neighbor treatment 4]
  A --> A5[Batch normalization, residual connections improve GCNs 5]
  A --> A6[Sparsity, normalization, residual connections: GCN essentials 26]
  B --> B1[GIN matches WL test expressivity 7]
  B --> B2[Structural encodings can't differentiate isomorphic nodes 16]
  B --> B3[Positional encodings break structural symmetry 17]
  B --> B4[Good encodings: unique, distance-sensitive, non-canonical 18]
  B --> B5[Laplacian encodings: hybrid structural-positional representation 19]
  B --> B6[Random sign flips ensure eigenvector independence 20]
  C --> C1[WL test inspires expressive GNNs 6]
  C --> C2[Higher-order WL tests increase expressivity 8]
  C --> C3[Equivariant GNNs face practical limitations 9]
  C --> C4[Recent work: 3WL-expressive GNNs, reduced complexity 10]
  C --> C5[Expressive link prediction needs joint representation 23]
  C --> C6[Improving WL techniques efficiency, maintaining expressivity 29]
  D --> D1[Benchmarks: representative, realistic, medium-large datasets 11]
  D --> D2[Datasets for various graph tasks introduced 12]
  D --> D3[Consistent experimental settings ensure fair comparisons 13]
  D --> D4[Message passing GCNs outperform WL GNNs 14]
  D --> D5[Anisotropic mechanisms improve isotropic GCNs 15]
  D --> D6[Laplacian encodings improve structured graph performance 21]
  E --> E1[GCNs may fail in link prediction 22]
  E --> E2[Edge representations enhance link prediction performance 24]
  E --> E3[Message passing GCNs outperform WL GNNs 25]
  E --> E4[Anisotropic mechanisms improve isotropic GCNs practically 27]
  E --> E5[Laplacian eigenvectors outperform simple positional encodings 28]
  E --> E6[Future: match theory with performance through benchmarking 30]
  class Main main
  class A,A1,A2,A3,A4,A5,A6 basics
  class B,B1,B2,B3,B4,B5,B6 architecture
  class C,C1,C2,C3,C4,C5,C6 expressivity
  class D,D1,D2,D3,D4,D5,D6 benchmarking
  class E,E1,E2,E3,E4,E5,E6 future

Resume:

1.- Graph Neural Networks (GNNs) have become standard for analyzing graph data, with applications in chemistry, physics, recommender systems, and more.

2.- Benchmarking is crucial to track progress and develop powerful GNNs for real-world adoption of graph deep learning.

3.- Message Passing Graph Convolutional Neural Networks (MPGCNs) are popular GNNs, designed to be permutation-invariant, size-independent, and locality-preserving.

4.- Isotropic GCNs treat all neighbors equally, while anisotropic GCNs can differentiate between neighbors using edge features or learned mechanisms.
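The isotropic/anisotropic distinction can be sketched in a few lines of numpy. This is a minimal illustration, not the lecture's exact formulation: the anisotropic variant uses a hypothetical dot-product gate matrix `W_gate` as a stand-in for attention or edge-feature mechanisms.

```python
import numpy as np

def isotropic_update(H, A):
    """Isotropic GCN layer: every neighbor contributes equally.
    H: (n, d) node features, A: (n, n) adjacency matrix."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    return A @ H / deg                              # mean over neighbors

def anisotropic_update(H, A, W_gate):
    """Anisotropic layer: per-edge weights from a learned score
    (here a simple dot-product gate, hypothetical for illustration)."""
    scores = (H @ W_gate) @ H.T                     # (n, n) pairwise scores
    scores = np.where(A > 0, scores, -np.inf)       # mask non-edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = np.where(A > 0, weights, 0.0)
    weights /= weights.sum(axis=1, keepdims=True).clip(min=1e-9)
    return weights @ H                              # weighted neighbor mean
```

The softmax over masked scores makes each output a convex combination of neighbor features, so different neighbors can receive different weights, which the isotropic mean cannot do.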

5.- GCNs benefit from batch normalization and residual connections, improving learning speed and generalization.

6.- Weisfeiler-Lehman (WL) test is used to check graph non-isomorphism, inspiring GNNs designed to match its expressivity.
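The 1-WL test is just iterative color refinement: hash each node's color together with the multiset of its neighbors' colors. A minimal sketch (the integer relabeling stands in for a hash function):

```python
def wl_refine(adj, rounds=3):
    """1-WL color refinement on a graph given as neighbor lists.
    Returns the final color histogram; two graphs with different
    histograms are certainly non-isomorphic."""
    colors = [0] * len(adj)
    for _ in range(rounds):
        # signature = own color + sorted multiset of neighbor colors
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in range(len(adj))
        ]
        # relabel signatures with compact integer colors
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        colors = [relabel[sig] for sig in signatures]
    hist = {}
    for c in colors:
        hist[c] = hist.get(c, 0) + 1
    return hist
```

The test is one-sided: equal histograms do not prove isomorphism. The classic failure case is a 6-cycle versus two disjoint triangles, where every node looks identical to 1-WL.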

7.- Graph Isomorphism Network (GIN) is designed to be as expressive as the WL test for distinguishing non-isomorphic graphs.
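GIN's update rule from Xu et al. is h_v' = MLP((1 + ε)·h_v + Σ_{u∈N(v)} h_u); the injective sum aggregation is what gives it 1-WL power. A numpy sketch with an illustrative two-layer MLP (the weight matrices `W1`, `W2` and ReLU choice are assumptions, not the paper's exact configuration):

```python
import numpy as np

def gin_layer(H, A, eps, W1, W2):
    """One GIN layer: injective sum aggregation followed by an MLP.
    H: (n, d) features, A: (n, n) adjacency, eps: scalar."""
    agg = (1.0 + eps) * H + A @ H        # (1+eps)*h_v + sum over neighbors
    return np.maximum(agg @ W1, 0) @ W2  # 2-layer MLP with ReLU
```

Using sum (rather than mean or max) matters: mean aggregation collapses neighborhoods that differ only in multiplicity, which is exactly the information the WL test tracks.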

8.- Higher-order WL tests use k-tuples of nodes to improve expressivity, but with increased computational complexity.

9.- Equivariant GNNs aim to match k-WL test expressivity but face practical limitations due to high memory requirements.

10.- Recent work focuses on designing 3WL-expressive GNNs without cubic memory complexity.

11.- Benchmark datasets should be representative, realistic, and medium- to large-sized to statistically separate GNN performance.

12.- The lecture introduces datasets for graph regression, classification, node classification, and link prediction tasks.

13.- Experimental settings include consistent data splits, optimizer settings, and parameter budgets for fair comparisons.

14.- Message passing GCNs outperformed WL GNNs on all benchmark datasets, possibly due to better scalability.

15.- Anisotropic mechanisms improve isotropic GCNs, with attention mechanisms showing good generalization capabilities.

16.- Structural encodings from GCNs cannot differentiate isomorphic nodes, limiting expressivity.

17.- Positional encodings can break structural symmetry, providing unique representations for each node.

18.- Good positional encodings should be unique and distance-sensitive, but cannot have a canonical representation due to graph symmetries.

19.- Laplacian positional encodings use eigenvectors of the normalized Laplacian matrix as a hybrid structural-positional encoding.

20.- During training, sign flips of Laplacian eigenvectors are randomly sampled to ensure independence from arbitrary choices.
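Points 19-20 can be sketched directly: take the k smallest nontrivial eigenvectors of the symmetric normalized Laplacian as node encodings, and randomize their signs during training since eigenvectors are only defined up to sign. A minimal numpy version (the eigenvector count `k` and dense `eigh` solver are illustrative choices):

```python
import numpy as np

def laplacian_pe(A, k):
    """k smallest nontrivial eigenvectors of the symmetric normalized
    Laplacian L = I - D^{-1/2} A D^{-1/2} as positional encodings."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1:k + 1]          # skip the trivial constant eigenvector

def random_sign_flip(pe, rng):
    """Flip each eigenvector's sign at random each training step, so
    the model cannot latch onto the solver's arbitrary sign choice."""
    signs = rng.choice([-1.0, 1.0], size=pe.shape[1])
    return pe * signs[None, :]
```

Sign flipping leaves the encoding's geometry intact (distances between nodes are preserved) while removing the arbitrary sign as a learnable artifact.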

21.- Laplacian positional encodings significantly improved performance on highly structured graphs and link prediction tasks.

22.- GCNs may fail in link prediction tasks due to inability to differentiate between isomorphic nodes.

23.- Expressive GCNs for link prediction require joint representation of nodes, encoding distances between nodes.

24.- Edge representations with positional encodings enhance link prediction performance.

25.- The lecture concludes that message passing GCNs outperform WL GNNs on benchmark datasets.

26.- Graph sparsity, batch normalization, and residual connections are universal building blocks for effective GCNs.

27.- Anisotropic mechanisms improve isotropic GCNs in practice.

28.- Laplacian eigenvectors offer improvements over simple index positional encodings.

29.- Recent work aims to improve efficiency of WL techniques while maintaining expressivity.

30.- Future research should focus on matching theoretical advances with practical performance through rigorous benchmarking.

Knowledge Vault built by David Vivancos 2024