A proximal gradient algorithm for composite log-concave sampling
Abstract
We propose an algorithm to sample from composite log-concave distributions over $\mathbb{R}^d$, i.e., densities of the form $\pi \propto e^{-f-g}$, assuming access to gradient evaluations of $f$ and a restricted Gaussian oracle (RGO) for $g$. The latter requirement means that we can easily sample from the density $\text{RGO}_{g,h,y}(x) \propto \exp(-g(x) - \frac{1}{2h}||y-x||^2)$, which is the sampling analogue of the proximal operator for $g$. If $f + g$ is $\alpha$-strongly convex and $f$ is $\beta$-smooth, our sampler achieves $\varepsilon$ error in total variation distance with an iteration complexity governed by the condition number $\kappa = \beta/\alpha$, which matches prior state-of-the-art results for the case $g = 0$. We further extend our results to the cases where (1) $\pi$ is non-log-concave but satisfies a Poincaré or log-Sobolev inequality, and (2) $f$ is non-smooth but Lipschitz.
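To make the RGO-based scheme concrete, here is a minimal sketch of a proximal (Gibbs) sampler of the kind the abstract describes, for a toy one-dimensional target with $f(x) = x^2/2$ and $g$ the indicator of $[0, \infty)$, so that $\pi$ is a standard half-normal. The alternation between a forward Gaussian step and a backward RGO-style step follows the general proximal-sampler template; the function name, step size $h$, and the fact that the backward step is an exact truncated Gaussian are specific to this toy example, not taken from the paper (in general the backward step would be implemented approximately using gradient queries on $f$ and the RGO for $g$).

```python
import numpy as np

def proximal_sampler_halfnormal(n_iters=20000, h=0.5, seed=0):
    """Toy proximal (RGO-style Gibbs) sampler for pi(x) ∝ exp(-x^2/2) 1{x >= 0}.

    Here f(x) = x^2/2 and g is the indicator of [0, inf). Each iteration
    alternates:
      forward:  y | x ~ N(x, h)
      backward: x | y ∝ exp(-x^2/2 - (x - y)^2 / (2h)) restricted to x >= 0,
    which, because f is quadratic, is exactly a Gaussian with mean y/(1+h)
    and variance h/(1+h), truncated to [0, inf).
    """
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(h / (1.0 + h))  # std. dev. of the backward conditional
    x = 1.0                         # arbitrary initialization
    samples = np.empty(n_iters)
    for t in range(n_iters):
        # forward step: Gaussian perturbation of the current point
        y = x + np.sqrt(h) * rng.standard_normal()
        mu = y / (1.0 + h)
        # backward step: rejection-sample the truncated Gaussian on [0, inf)
        while True:
            x = mu + sigma * rng.standard_normal()
            if x >= 0.0:
                break
        samples[t] = x
    return samples
```

Since both conditionals are sampled exactly here, the chain is a valid Gibbs sampler for the joint density $\propto \exp(-f(x) - g(x) - \frac{1}{2h}||y-x||^2)$, whose $x$-marginal is the half-normal target; after burn-in the sample mean should be close to $\sqrt{2/\pi} \approx 0.798$.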
Source: arXiv:2605.12461v1 (http://arxiv.org/abs/2605.12461v1) | PDF: https://arxiv.org/pdf/2605.12461v1
May 13, 2026
Data Science
Statistics