When do neural ordinary differential equations generalize on complex networks?
Abstract
Neural ordinary differential equations (neural ODEs) can effectively learn dynamical systems from time series data, but their behavior on graph-structured data remains poorly understood, especially when they are applied to graphs whose size or structure differs from those encountered during training. We study neural ODEs with vector fields following the Barabási-Barzel form, trained on synthetic data from five common dynamical systems on graphs. Using a random-graph model that generates graphs with realistic and tunable structure, we find that degree heterogeneity and the type of dynamical system are the primary factors determining the neural ODEs' ability to generalize across graph sizes and properties. The same factors govern their ability to capture fixed points and to maintain performance in the presence of missing data. Average clustering plays a secondary role in determining performance. Our findings highlight neural ODEs as a powerful approach to understanding complex systems, but underscore challenges arising from degree heterogeneity and clustering in realistic graphs.
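The Barabási-Barzel vector-field form referenced above decomposes node dynamics into a self term and pairwise coupling terms weighted by the adjacency matrix, dx_i/dt = F(x_i) + Σ_j A_ij G(x_i, x_j). A minimal sketch of integrating such a system, assuming illustrative (hypothetical) choices of F and G — the paper's actual five dynamical systems and learned vector fields are not specified in this abstract:

```python
import numpy as np

def barzel_barabasi_rhs(x, A, f, g):
    """Right-hand side dx_i/dt = f(x_i) + sum_j A_ij * g(x_i, x_j)."""
    # Pairwise coupling matrix: G[i, j] = g(x_i, x_j), via broadcasting
    G = g(x[:, None], x[None, :])
    return f(x) + (A * G).sum(axis=1)

# Hypothetical illustrative choices (not from the paper):
f = lambda x: -x                    # linear self-decay
g = lambda xi, xj: xj / (1.0 + xj)  # saturating coupling on the neighbor state

# Tiny undirected triangle graph (every node has degree 2)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

x = np.array([0.5, 1.0, 2.0])  # initial node states
dt = 0.01
for _ in range(1000):          # forward Euler integration to t = 10
    x = x + dt * barzel_barabasi_rhs(x, A, f, g)

# For these choices the nonzero fixed point satisfies x = 2x/(1+x), i.e. x = 1
print(x)
```

In a neural ODE of this form, `f` and `g` would be replaced by small neural networks trained on trajectory data; the fixed-point behavior probed in the loop above is exactly the kind of property the abstract reports the learned models may or may not capture on graphs unlike those seen in training.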
Source: arXiv:2602.08980v1 (http://arxiv.org/abs/2602.08980v1); PDF: https://arxiv.org/pdf/2602.08980v1