Research Paper | Researchia:202602.05006

Pseudo-Invertible Neural Networks

Yamit Ehrlich

Abstract

The Moore-Penrose pseudo-inverse (PInv) is the fundamental solution for linear systems. In this paper, we propose a natural generalization of PInv to the nonlinear regime in general, and to neural networks in particular. We introduce Surjective Pseudo-invertible Neural Networks (SPNN), a class of architectures explicitly designed to admit a tractable non-linear PInv. The proposed non-linear PInv and its implementation in SPNN satisfy fundamental geometric properties. One such property is null-space projection, or "back-projection", x' = x + A^\dagger (y - Ax), which moves a sample x to its closest consistent state x' satisfying Ax = y. We formalize Non-Linear Back-Projection (NLBP), a method that guarantees the same consistency constraint, f(x) = y, for non-linear mappings via our defined PInv. We leverage SPNNs to expand the scope of zero-shot inverse problems. Diffusion-based null-space projection has revolutionized zero-shot solving of linear inverse problems by exploiting closed-form back-projection. We extend this method to non-linear degradations, where "degradation" is broadly generalized to include any non-linear loss of information, spanning optical distortions to semantic abstractions such as classification. This approach enables zero-shot inversion of complex degradations and allows precise semantic control over generative outputs without retraining the diffusion prior.

Submitted: February 5, 2026 | Subjects: Machine Learning; Data Science
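The linear back-projection property quoted in the abstract is easy to verify numerically. The sketch below (with an arbitrary hypothetical matrix A and measurement y, not from the paper) shows that x' = x + A^\dagger (y - Ax) satisfies Ax' = y whenever y lies in the range of A, and that applying the projection twice changes nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))    # wide matrix: underdetermined linear system
x = rng.standard_normal(5)         # arbitrary starting sample
y = A @ rng.standard_normal(5)     # measurement guaranteed to lie in range(A)

A_pinv = np.linalg.pinv(A)         # Moore-Penrose pseudo-inverse (via SVD)

# Null-space projection ("back-projection"): move x to the closest point
# consistent with the measurement y.
x_proj = x + A_pinv @ (y - A @ x)

print(np.allclose(A @ x_proj, y))  # consistency: A x' = y

# The projection is idempotent: a consistent point is left unchanged.
x_again = x_proj + A_pinv @ (y - A @ x_proj)
print(np.allclose(x_again, x_proj))
```

The update x' - x = A^\dagger (y - Ax) lies in the row space of A, i.e. orthogonal to the null space, which is why x' is the consistent state closest to x in the Euclidean sense. The paper's contribution is extending this guarantee to non-linear mappings f(x) = y.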


Source: arXiv:2602.06042v1 (http://arxiv.org/abs/2602.06042v1) | PDF: https://arxiv.org/pdf/2602.06042v1


Submission Info
Date: Feb 5, 2026
Topic: Data Science
Area: Machine Learning