Arash A. Amini

Associate Professor, UCLA


Limits and Potentials of Graph Neural Networks: From Spectral Simplicity to Polynomial Bounds


This talk explores the interplay between complexity and efficiency in graph neural networks (GNNs), drawing on both theoretical and empirical insights. We begin with empirical evidence that spectral GNNs for semi-supervised node classification benefit from simplicity, challenging the need for complex GNN architectures by showing that simpler methods can rival or exceed their performance. This motivates a theoretical deep dive into polynomial GNNs (Poly-GNNs), examining the effect of 'depth' (higher-order polynomial features) within a contextual stochastic block model. Contrary to expectations, we discover that increasing depth mainly modifies constants, without altering the fundamental rates of separability. This finding highlights the profound effect of 'network noise' in deep GNNs and questions the prevailing belief that greater complexity necessarily improves discriminative power in GNNs.
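To make the setting concrete, the following is a minimal sketch (not the speaker's code) of what 'depth' means for a Poly-GNN in a contextual stochastic block model: higher powers of a normalized adjacency matrix applied to node features, concatenated before a linear classifier. All parameter values and the degree normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contextual stochastic block model: 2 communities, n nodes.
n, d, K = 20, 4, 3            # nodes, feature dim, polynomial depth
z = rng.integers(0, 2, n)     # community labels
p_in, p_out = 0.6, 0.1        # within/between edge probabilities
P = np.where(z[:, None] == z[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                   # symmetric adjacency, no self-loops

# Contextual features: community-dependent means plus Gaussian noise.
mu = np.array([[1.0] * d, [-1.0] * d])
X = mu[z] + rng.normal(size=(n, d))

# Row-normalized adjacency (one common choice; an assumption here).
deg = A.sum(axis=1)
deg[deg == 0] = 1.0
An = A / deg[:, None]

# Polynomial feature map of "depth" K: [X, An X, An^2 X, ..., An^K X].
feats = [X]
for _ in range(K):
    feats.append(An @ feats[-1])
Phi = np.concatenate(feats, axis=1)   # shape (n, d * (K + 1))
print(Phi.shape)                      # (20, 16)
```

The question studied in the talk is how the separability of the two communities in the rows of `Phi` scales as `K` grows; the result described above is that larger `K` changes constants rather than the rate.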


Arash A. Amini received his Ph.D. in electrical engineering from the University of California, Berkeley, in 2011. He is currently an Associate Professor of Statistics at UCLA. His research interests include high-dimensional statistics, functional and nonparametric estimation, network data analysis, optimization, and graphical models.

