$π$-StepNFT: Wider Space Needs Finer Steps in Online RL for Flow-based VLAs
Abstract
Flow-based vision-language-action (VLA) models excel in embodied control but suffer from intractable likelihoods during multi-step sampling, hindering online reinforcement learning. We propose \textbf{\textit{$π$-StepNFT}} (Step-wise Negative-aware Fine-Tuning), a critic-and-likelihood-free framework that requires only a single forward pass per optimization step and eliminates auxiliary value networks. We identify that wider exploration spaces necessitate finer-grained, step-wise guidance for alignment. Empirically, $π$-StepNFT unlocks latent potential on LIBERO with competitive few-shot robustness. Moreover, it achieves superior generalization on ManiSkill, outperforming value-based baselines in OOD scenarios by preventing overfitting to multimodal features. This property offers a scalable solution promising for complex real-world applications.
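The abstract's central mechanism (critic- and likelihood-free fine-tuning with a single forward pass per step, weighting each denoising step positively for successful rollouts and negatively for failed ones) can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's actual algorithm: the linear velocity model, the `stepwise_nft_update` function, and the binary success signal are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a linear "flow policy" predicts a velocity field
# v_theta(x, t) = W @ [x; t]. We fine-tune it step-wise using only a binary
# success/failure signal, with no critic and no likelihood computation.
dim, steps, lr = 4, 8, 1e-2
W = rng.normal(scale=0.1, size=(dim, dim + 1))

def stepwise_nft_update(W, rollout, success, lr=lr):
    """One optimization step, sketching the negative-aware idea:
    a single forward pass per denoising step, weighted +1 for
    successful rollouts and -1 for failed ones (no value network)."""
    weight = 1.0 if success else -1.0
    grad = np.zeros_like(W)
    for x, t, v_target in rollout:
        feats = np.concatenate([x, [t]])
        v_pred = W @ feats                      # single forward pass
        # Regress toward the sampled velocity target on success,
        # and away from it on failure (the "negative-aware" term).
        grad += weight * np.outer(v_pred - v_target, feats)
    return W - lr * grad / len(rollout)

# Fabricated rollout: (state, flow time, target velocity) per flow step.
rollout = [(rng.normal(size=dim), k / steps, rng.normal(size=dim))
           for k in range(steps)]
W_pos = stepwise_nft_update(W, rollout, success=True)
W_neg = stepwise_nft_update(W, rollout, success=False)
```

The per-step weighting is what makes the guidance "finer-grained": every denoising step receives its own learning signal, rather than one scalar return applied to the whole multi-step sample.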
Source: arXiv:2603.02083v1 (https://arxiv.org/abs/2603.02083v1; PDF: https://arxiv.org/pdf/2603.02083v1)