
Rigorous Error Certification for Neural PDE Solvers: From Empirical Residuals to Solution Guarantees

Amartya Mukherjee

Abstract

Uncertainty quantification for partial differential equations is traditionally grounded in discretization theory, where solution error is controlled via mesh/grid refinement. Physics-informed neural networks fundamentally depart from this paradigm: they approximate solutions by minimizing residual losses at collocation points, introducing new sources of error arising from optimization, sampling, representation, and overfitting. As a result, controlling the generalization error in the solution space remains an open problem. Our main theoretical contribution establishes generalization bounds that connect residual control to solution-space error. We prove that when neural approximations lie in a compact subset of the solution space, vanishing residual error guarantees convergence to the true solution. We derive deterministic and probabilistic convergence results and provide certified generalization bounds translating residual, boundary, and initial errors into explicit solution error guarantees.
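The abstract's central claim — that driving the residual, boundary, and initial errors to zero controls the true solution error — can be illustrated on a toy problem. The sketch below is not the paper's method; it uses truncated Taylor polynomials as a stand-in for a trained network on the ODE u' = u, u(0) = 1 (true solution e^x on [0, 1]), and compares the PINN-style interior residual |u' − u| at collocation points against the actual solution error |u − e^x|. All function names and the choice of test problem are illustrative assumptions.

```python
import math
import numpy as np

def taylor_approx(x, degree):
    """Degree-`degree` Taylor partial sum of exp and its derivative.

    Acts as a stand-in for a neural approximation: as `degree` grows,
    the residual u' - u shrinks, and so does the true error u - e^x.
    """
    u = sum(x**k / math.factorial(k) for k in range(degree + 1))
    du = sum(k * x**(k - 1) / math.factorial(k) for k in range(1, degree + 1))
    return u, du

x = np.linspace(0.0, 1.0, 101)  # collocation points on [0, 1]

for degree in (2, 4, 8):
    u, du = taylor_approx(x, degree)
    residual = np.max(np.abs(du - u))            # interior residual error
    boundary = abs(taylor_approx(np.zeros(1), degree)[0][0] - 1.0)  # initial-condition error
    sol_err = np.max(np.abs(u - np.exp(x)))      # true solution-space error
    print(f"degree={degree}: residual={residual:.2e}, "
          f"boundary={boundary:.2e}, solution error={sol_err:.2e}")
```

As the degree increases, both the residual and the solution error decay together, which is the qualitative behavior the paper's bounds make quantitative: residual + boundary + initial control implies solution-space control, provided the approximants stay in a compact subset of the solution space.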


Source: arXiv:2603.19165v1 — http://arxiv.org/abs/2603.19165v1 (PDF: https://arxiv.org/pdf/2603.19165v1)

Submission: 3/20/2026
Subjects: Machine Learning; Data Science
