Research Paper · Researchia:202601.29183 · [Statistics & ML > Statistics]

Efficient Stochastic Optimisation via Sequential Monte Carlo

James Cuin

Abstract

The problem of optimising functions with intractable gradients frequently arises in machine learning and statistics, ranging from maximum marginal likelihood estimation to fine-tuning of generative models. Stochastic approximation methods for this class of problems typically require inner sampling loops to obtain (biased) stochastic gradient estimates, which rapidly becomes computationally expensive. In this work, we develop sequential Monte Carlo (SMC) samplers for the optimisation of functions with intractable gradients. Our approach replaces expensive inner sampling methods with efficient SMC approximations, which can result in significant computational gains. We establish convergence results for the underlying recursions that our SMC samplers approximate. We demonstrate the effectiveness of our approach on the reward-tuning of energy-based models in various settings.


Source: arXiv:2601.22003v1 (http://arxiv.org/abs/2601.22003v1) · PDF: https://arxiv.org/pdf/2601.22003v1

Submission: 1/29/2026
Subjects: Statistics; Statistics & ML
