Rigorous Error Certification for Neural PDE Solvers: From Empirical Residuals to Solution Guarantees
Abstract
Uncertainty quantification for partial differential equations is traditionally grounded in discretization theory, where solution error is controlled via mesh or grid refinement. Physics-informed neural networks fundamentally depart from this paradigm: they approximate solutions by minimizing residual losses at collocation points, introducing new sources of error arising from optimization, sampling, representation, and overfitting. As a result, bounding the generalization error in the solution space remains an open problem. Our main theoretical contribution establishes generalization bounds that connect residual control to solution-space error. We prove that when the neural approximations lie in a compact subset of the solution space, vanishing residual error guarantees convergence to the true solution. We derive deterministic and probabilistic convergence results and provide certified generalization bounds that translate residual, boundary, and initial-condition errors into explicit solution error guarantees.
Source: arXiv:2603.19165v1 - http://arxiv.org/abs/2603.19165v1 PDF: https://arxiv.org/pdf/2603.19165v1
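To make the residual-minimization paradigm described in the abstract concrete, the sketch below (an illustrative NumPy example, not code from the paper) evaluates the empirical loss a physics-informed solver would minimize for a 1D Poisson problem, -u'' = f on (0, 1): a PDE residual term at interior collocation points plus a boundary-mismatch term. The problem, function names, and finite-difference approximation of u'' are all assumptions for illustration.

```python
import numpy as np

def poisson_residual_loss(u, f, x_col, x_bnd, u_bnd, h=1e-3):
    """Empirical residual loss for -u'' = f on (0, 1).

    u      : candidate solution (callable), e.g. a trained network
    f      : source term (callable)
    x_col  : interior collocation points
    x_bnd  : boundary points, u_bnd : prescribed boundary values
    The second derivative is approximated by a central difference with step h.
    """
    u_xx = (u(x_col + h) - 2.0 * u(x_col) + u(x_col - h)) / h**2
    interior = np.mean((-u_xx - f(x_col)) ** 2)   # mean-squared PDE residual
    boundary = np.mean((u(x_bnd) - u_bnd) ** 2)   # boundary-condition mismatch
    return interior + boundary

# The exact solution of -u'' = pi^2 sin(pi x), u(0) = u(1) = 0, is sin(pi x),
# so plugging it in should give a near-zero loss (up to finite-difference error).
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
x_col = np.linspace(0.05, 0.95, 19)
x_bnd, u_bnd = np.array([0.0, 1.0]), np.zeros(2)
loss = poisson_residual_loss(u_exact, f, x_col, x_bnd, u_bnd)
```

In this framing, the paper's question is precisely when a small value of `loss` (residual plus boundary terms) certifies that `u` is close to the true solution in the solution-space norm.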