Research Paper (Researchia: 202602.18009)

A Note on Non-Composability of Layerwise Approximate Verification for Neural Inference

Or Zamir

Abstract

A natural and informal approach to verifiable (or zero-knowledge) ML inference over floating-point data is: "prove that each layer was computed correctly up to tolerance δ; therefore the final output is a reasonable inference result". This short note gives a simple counterexample showing that this inference is false in general: for any neural network, we can construct a functionally equivalent network for which adversarially chosen approximation-magnitude errors in individual layer computations suffice to steer the final output arbitrarily (within a prescribed bounded range).

Submitted: February 18, 2026. Subjects: Cybersecurity; Computer Science
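To make the failure mode concrete, here is a minimal sketch (an illustrative assumption, not necessarily the note's exact construction) of why layerwise tolerances do not compose: rescaling a two-layer ReLU network by a factor `C` (dividing the first layer's weights by `C` and multiplying the second layer's by `C`) leaves the network functionally equivalent, since ReLU is positively homogeneous. But a δ-sized error injected at the shrunken intermediate layer, still within the per-layer tolerance, is amplified to roughly `C * δ` at the output.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # first-layer weights (hypothetical network)
W2 = rng.standard_normal((1, 4))   # second-layer weights
x = rng.standard_normal(3)         # an arbitrary input

def net(a, b, v):
    # Two-layer network with a ReLU nonlinearity between the layers.
    return b @ np.maximum(a @ v, 0.0)

C = 1e6                     # rescaling factor, chosen by the adversary
W1s, W2s = W1 / C, C * W2   # functionally equivalent: ReLU(W1 x / C) = ReLU(W1 x) / C

honest = net(W1, W2, x)
rescaled = net(W1s, W2s, x)
assert np.allclose(honest, rescaled)   # exactly the same function

delta = 1e-3                                # per-layer verification tolerance
h = np.maximum(W1s @ x, 0.0)                # honest (tiny) intermediate values
e = delta * np.sign(W2s[0])                 # adversarial error, |e_i| <= delta,
                                            # aligned with the next layer's weights
steered = W2s @ (h + e)                     # passes the layerwise delta check...
shift = float(np.abs(steered - honest)[0])  # ...yet the output moves by ~ C * delta
```

Each layer individually satisfies the δ tolerance, yet `shift` grows linearly in `C`, which the adversary picks when constructing the equivalent network; this is the sense in which layerwise approximate verification fails to compose.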


Source: arXiv:2602.15756v1 - http://arxiv.org/abs/2602.15756v1
PDF: https://arxiv.org/pdf/2602.15756v1

