Training-Free Generative Modeling via Kernelized Stochastic Interpolants
Abstract
We develop a kernel method for generative modeling within the stochastic interpolant framework, replacing neural network training with the solution of linear systems. The drift of the generative SDE is $\hat b_t(x) = \nabla\varphi(x)^\top \eta_t$, where $\eta_t \in \mathbb{R}^P$ solves a $P \times P$ system computable from data, with $P$ independent of the data dimension $d$. Since the estimates are inexact, the diffusion coefficient $D_t$ affects sample quality; the optimal $D_t^*$ from Girsanov diverges at $t = 0$, but this poses no difficulty, and we develop an integrator that handles it seamlessly. The framework accommodates diverse feature maps -- scattering transforms, pretrained generative models, etc. -- enabling training-free generation and model combination. We demonstrate the approach on financial time series, turbulence, and image generation.
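As a rough illustration of the construction described in the abstract, here is a minimal sketch (not the paper's implementation) of the kernelized drift estimate and sampler. It assumes a linear interpolant $x_t = (1-t)x_0 + t x_1$, a random Fourier feature map $\varphi:\mathbb{R}^d\to\mathbb{R}^P$, and a constant diffusion coefficient in place of the optimal $D_t^*$ and its dedicated integrator; all names and parameter choices below are illustrative.

```python
# Sketch of a kernelized stochastic-interpolant sampler under the assumptions above.
import numpy as np

rng = np.random.default_rng(0)
d, P, n = 2, 64, 500                       # data dimension, number of features, sample size

# Random Fourier features phi_p(x) = cos(omega_p . x + b_p)  (assumed feature map)
Omega = rng.normal(size=(P, d))
bias = rng.uniform(0.0, 2 * np.pi, size=P)

def feature_jacobian(x):
    """Jacobian J_phi(x) of shape (P, d): row p is grad phi_p(x)."""
    return -np.sin(Omega @ x + bias)[:, None] * Omega

def fit_eta(t, x0, x1, reg=1e-6):
    """Solve the P x P least-squares system for eta_t so that
    b_t(x) = J_phi(x)^T eta_t approximates E[xdot_t | x_t = x]."""
    xt = (1.0 - t) * x0 + t * x1           # interpolant samples (assumed linear path)
    xdot = x1 - x0                          # time derivative of the interpolant
    A = np.zeros((P, P))
    c = np.zeros(P)
    for xi, vi in zip(xt, xdot):
        J = feature_jacobian(xi)            # (P, d)
        A += J @ J.T
        c += J @ vi
    A /= len(xt)
    c /= len(xt)
    return np.linalg.solve(A + reg * np.eye(P), c)

def drift(x, eta):
    """Estimated drift b_t(x) = J_phi(x)^T eta_t, a d-dimensional vector."""
    return feature_jacobian(x).T @ eta

# Toy data: base samples x0 ~ N(0, I), target samples x1 from a shifted Gaussian.
x0 = rng.normal(size=(n, d))
x1 = rng.normal(size=(n, d)) + np.array([3.0, -1.0])

# Euler--Maruyama simulation of dX = b_t(X) dt + sqrt(2 D) dW from t = 0 to t = 1,
# with a constant D instead of the time-dependent D_t^* treated in the paper.
D, n_steps = 0.05, 50
dt = 1.0 / n_steps
X = rng.normal(size=d)                      # start from the base distribution
for k in range(n_steps):
    t = k * dt
    eta_t = fit_eta(t, x0, x1)              # in practice, precompute on a time grid
    X = X + drift(X, eta_t) * dt + np.sqrt(2 * D * dt) * rng.normal(size=d)

print("generated sample:", X)
```

Because estimating $\eta_t$ only requires solving a $P \times P$ system built from data at each time point, the coefficients can be precomputed on a time grid before sampling, which is what makes the approach training-free.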
Source: arXiv:2602.20070v1 -- http://arxiv.org/abs/2602.20070v1 | PDF: https://arxiv.org/pdf/2602.20070v1
Feb 25, 2026
Data Science
Machine Learning