Research Paper · Researchia:202602.18009 · [Computer Science > Cybersecurity]

A Note on Non-Composability of Layerwise Approximate Verification for Neural Inference

Or Zamir

Abstract

A natural and informal approach to verifiable (or zero-knowledge) ML inference over floating-point data is: ``prove that each layer was computed correctly up to tolerance δ; therefore the final output is a reasonable inference result.'' This short note gives a simple counterexample showing that this implication is false in general: for any neural network, we can construct a functionally equivalent network for which adversarially chosen approximation-magnitude errors in the individual layer computations suffice to steer the final output arbitrarily (within a prescribed bounded range).
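The failure mode the abstract describes can be illustrated with a minimal sketch (this is an assumed toy construction for intuition, not necessarily the paper's exact one): split a linear layer W into W/c followed by multiplication by c. The composition is functionally identical to W, yet an error of magnitude δ injected between the two halves, each of which individually stays within tolerance, emerges amplified by c at the output.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

c = 1e6       # amplification factor chosen by the adversary
delta = 1e-3  # per-layer approximation tolerance

exact = W @ x                    # original single layer
split_exact = c * ((W / c) @ x)  # functionally equivalent two-layer network
assert np.allclose(exact, split_exact)

# Adversarially chosen error of magnitude delta after the first half;
# each half is still "correct up to tolerance delta" in isolation.
err = delta * np.ones(4)
steered = c * ((W / c) @ x + err)

# The final output is shifted by c * delta, far beyond delta.
print(np.max(np.abs(steered - exact)))
```

Since c is under the network designer's control, a layerwise-verified tolerance δ places no useful bound on the final output's deviation.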


Source: arXiv:2602.15756v1 - http://arxiv.org/abs/2602.15756v1 | PDF: https://arxiv.org/pdf/2602.15756v1

Submission: 2/18/2026
Subjects: Cybersecurity; Computer Science
