A Note on Non-Composability of Layerwise Approximate Verification for Neural Inference
Abstract
A natural and informal approach to verifiable (or zero-knowledge) ML inference over floating-point data is: ``prove that each layer was computed correctly up to tolerance $\delta$; therefore the final output is a reasonable inference result''. This short note gives a simple counterexample showing that this inference is false in general: for any neural network, we can construct a functionally equivalent network for which adversarially chosen approximation-magnitude errors in individual layer computations suffice to steer the final output arbitrarily (within a prescribed bounded range).
Source: arXiv:2602.15756v1 (http://arxiv.org/abs/2602.15756v1)
PDF: https://arxiv.org/pdf/2602.15756v1
Feb 18, 2026
Computer Science
Cybersecurity