TY - JOUR
T1 - Taxonomy of Benchmarks in Graph Representation Learning
AU - Liu, Renming
AU - Cantürk, Semih
AU - Wenkel, Frederik
AU - McGuire, Sarah
AU - Wang, Xinyi
AU - Little, Anna
AU - O'Bray, Leslie
AU - Perlmutter, Michael
AU - Rieck, Bastian
AU - Hirn, Matthew
AU - Wolf, Guy
AU - Rampášek, Ladislav
N1 - Publisher Copyright:
© 2022 Proceedings of Machine Learning Research. All rights reserved.
PY - 2022
Y1 - 2022
N2 - Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a sensitivity profile that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in the selection and development of adequate graph benchmarks, and in better-informed evaluation of future GNN methods. Finally, our approach and implementation in the GTaxoGym package are extensible to multiple graph prediction task types and future datasets.
AB - Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a sensitivity profile that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in the selection and development of adequate graph benchmarks, and in better-informed evaluation of future GNN methods. Finally, our approach and implementation in the GTaxoGym package are extensible to multiple graph prediction task types and future datasets.
UR - http://www.scopus.com/inward/record.url?scp=85164538462&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85164538462
VL - 198
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 1st Learning on Graphs Conference, LOG 2022
Y2 - 9 December 2022 through 12 December 2022
ER -