PuSH - Publikationsserver des Helmholtz Zentrums München

O'Bray, L.*; Horn, M.*; Rieck, B.; Borgwardt, K.*

Evaluation metrics for graph generative models: Problems, pitfalls, and practical solutions

In: International Conference on Learning Representations, 25–29 April 2022, Virtual. 2022.

Graph generative models are a highly active branch of machine learning. Given the steady development of new models of ever-increasing complexity, it is necessary to provide a principled way to evaluate and compare them. In this paper, we enumerate the desirable criteria for such a comparison metric and provide an overview of the status quo of graph generative model comparison in use today, which predominantly relies on the maximum mean discrepancy (MMD). We perform a systematic evaluation of MMD in the context of graph generative model comparison, highlighting some of the challenges and pitfalls researchers may inadvertently encounter. After conducting a thorough analysis of the behaviour of MMD on synthetically-generated perturbed graphs as well as on recently-proposed graph generative models, we are able to provide a suitable procedure to mitigate these challenges and pitfalls. We aggregate our findings into a list of practical recommendations for researchers to use when evaluating graph generative models.
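As background for the metric discussed in the abstract, the following is a minimal sketch of a biased squared-MMD estimator between two samples of graph descriptors (e.g. degree histograms), using a Gaussian kernel. The kernel choice, the bandwidth `sigma`, and the use of degree histograms as descriptors are illustrative assumptions, not the paper's specific procedure; indeed, sensitivity to such choices is one of the pitfalls the paper analyses.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel between two fixed-length descriptor vectors
    # (sigma is an assumed, user-chosen bandwidth).
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def mmd_squared(X, Y, sigma=1.0):
    """Biased estimator of squared MMD between samples X and Y.

    X, Y: arrays of shape (n_samples, n_features), e.g. degree
    histograms of graphs from two generative models.
    """
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    k_xx = np.mean([gaussian_kernel(a, b, sigma) for a in X for b in X])
    k_yy = np.mean([gaussian_kernel(a, b, sigma) for a in Y for b in Y])
    k_xy = np.mean([gaussian_kernel(a, b, sigma) for a in X for b in Y])
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return k_xx + k_yy - 2.0 * k_xy
```

With this estimator, comparing a sample against itself yields zero, while samples drawn from different distributions yield a positive value; the magnitude, however, depends strongly on the kernel and bandwidth, which is exactly why the paper argues these choices must be made carefully.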

Publication type: Article: Conference paper
Conference title: International Conference on Learning Representations
Conference date: 25–29 April 2022
Conference venue: Virtual