The Role of Generator Access in Autoregressive Post-Training
Abstract
We study how generator access constrains autoregressive post-training. The central question is whether the learner is confined to fresh root-start rollouts or can return to previously built prefixes and query the next-token rule there. In the root-start regime, output sampling, generated-token log probabilities, top-k reports, and full next-token distributions along sampled trajectories all reduce to one canonical experiment, limited by the on-policy probability of reaching informative prefixes. Weak prefix control breaks this barrier, and once control is available, richer observations such as conditional sampling or logits can outperform top-k access. Changing only the generator interface creates an exponential gap for KL-regularized outcome-reward post-training.
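The distinction between the two access regimes can be illustrated with a toy sketch. Everything below is hypothetical and not from the paper: a hand-built next-token rule over a three-symbol vocabulary, a `root_start_rollout` function modeling on-policy sampling from the empty prefix, and a `prefix_query` function modeling weak prefix control. The point is that an informative prefix with low on-policy probability is expensive to reach by root-start sampling but costs a single query under prefix control.

```python
import random

# Hypothetical toy autoregressive generator; names and probabilities
# are illustrative, not taken from the paper.
VOCAB = ["a", "b", "<eos>"]

def next_token_dist(prefix):
    """Next-token rule: the prefix ("b","b","b") is informative
    (it forces termination) but is rare under on-policy sampling."""
    if prefix.count("b") >= 3:
        return {"a": 0.0, "b": 0.0, "<eos>": 1.0}
    return {"a": 0.9, "b": 0.09, "<eos>": 0.01}

def root_start_rollout(max_len=8):
    """Root-start access: sample a fresh trajectory from the empty
    prefix; the learner only sees prefixes the policy happens to visit."""
    prefix = []
    for _ in range(max_len):
        dist = next_token_dist(prefix)
        tok = random.choices(list(dist), weights=list(dist.values()))[0]
        prefix.append(tok)
        if tok == "<eos>":
            break
    return prefix

def prefix_query(prefix):
    """Weak prefix control: query the next-token rule at any chosen
    prefix, regardless of its on-policy probability."""
    return next_token_dist(prefix)

# Root-start sampling reaches ("b","b","b") with probability about
# 0.09**3 per rollout; prefix control inspects it in one query.
informative = prefix_query(["b", "b", "b"])
```

The sketch mirrors the abstract's barrier: under root-start access, every observation channel is gated by the chance of visiting the informative prefix, whereas prefix control removes that gate entirely.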
Source: arXiv:2604.04855v1 (http://arxiv.org/abs/2604.04855v1; PDF: https://arxiv.org/pdf/2604.04855v1)