
Ensembling Language Models with Sequential Monte Carlo

Robin Shing Moon Chan

Abstract

Practitioners have access to an abundance of language models and prompting strategies for solving many language modeling tasks; yet prior work shows that modeling performance is highly sensitive to both choices. Classical machine learning ensembling techniques offer a principled approach: aggregate predictions from multiple sources to achieve better performance than any single one. However, applying ensembling to language models during decoding is challenging: naively aggregating next-token probabilities yields samples from a locally normalized, biased approximation of the generally intractable ensemble distribution over strings. In this work, we introduce a unified framework for composing $K$ language models into $f$-ensemble distributions for a wide range of functions $f\colon\mathbb{R}_{\geq 0}^{K}\to\mathbb{R}_{\geq 0}$. To sample from these distributions, we propose a byte-level sequential Monte Carlo (SMC) algorithm that operates in a shared character space, enabling ensembles of models with mismatched vocabularies and consistent sampling in the limit. We evaluate a family of $f$-ensembles across prompt and model combinations on various structured text generation tasks, highlighting the benefits of alternative aggregation strategies over traditional probability averaging, and showing that better posterior approximations can yield better ensemble performance.
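The core idea the abstract describes, using the locally normalized aggregate as an SMC proposal and correcting it with importance weights and resampling, can be sketched in a few lines. The toy code below is not the paper's implementation: it assumes a three-character alphabet with a `$` stop symbol, two hypothetical stand-in character-level "models" (`make_toy_lm`), linear pooling as the choice of $f$ (`f_mean`), and naive multinomial resampling at every step; all names and parameters are illustrative.

```python
import numpy as np

VOCAB = ["a", "b", "$"]  # toy alphabet; '$' marks end of string (hypothetical)

def make_toy_lm(bias):
    """Stand-in for a byte-level LM: maps a prefix to next-character probabilities."""
    def next_char_probs(prefix):
        # Ignores the prefix; 90% continue ('a' vs. 'b' per bias), 10% stop.
        return np.array([0.9 * bias, 0.9 * (1.0 - bias), 0.1])
    return next_char_probs

def f_mean(ps):
    """Linear pooling: f(p_1, ..., p_K) = (1/K) * sum_k p_k."""
    return float(np.mean(ps))

def smc_ensemble(models, f, n_particles=256, max_len=20, seed=0):
    rng = np.random.default_rng(seed)
    K, V = len(models), len(VOCAB)
    # Particle state: (string so far, per-model prefix log-probs, still growing?)
    particles = [("", np.zeros(K), True) for _ in range(n_particles)]
    for _ in range(max_len):
        logw = np.zeros(n_particles)
        for i, (x, lp, alive) in enumerate(particles):
            if not alive:
                continue  # finished strings keep weight 1 under the target
            cond = np.stack([m(x) for m in models])      # (K, V) next-char probs
            prefix = np.exp(lp)[:, None] * cond          # per-model prefix probs
            target = np.array([f(prefix[:, v]) for v in range(V)])
            q = target / target.sum()                    # locally normalized proposal
            v = rng.choice(V, p=q)
            # Incremental SMC weight: f(new prefix) / (f(old prefix) * q(char)).
            logw[i] = np.log(target[v]) - np.log(f(np.exp(lp))) - np.log(q[v])
            particles[i] = ((x + VOCAB[v]) if VOCAB[v] != "$" else x,
                            lp + np.log(cond[:, v]),
                            VOCAB[v] != "$")
        # Multinomial resampling toward the (unnormalized) f-ensemble target.
        w = np.exp(logw - logw.max())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = [particles[i] for i in idx]
        if not any(p[2] for p in particles):
            break
    return [p[0] for p in particles]

samples = smc_ensemble([make_toy_lm(0.8), make_toy_lm(0.3)], f_mean)
print(samples[:5])
```

Dropping the weight correction and resampling leaves exactly the naive, locally normalized ensemble the abstract warns about; the incremental weights are what allow the particle population to converge to the global $f$-ensemble over strings as the number of particles grows.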


Source: arXiv:2603.05432v1 (http://arxiv.org/abs/2603.05432v1) | PDF: https://arxiv.org/pdf/2603.05432v1

Submission: March 6, 2026
Subjects: Artificial Intelligence (AI)
