
Can RL Teach Long-Horizon Reasoning to LLMs? Expressiveness Is Key

Tianle Wang

Abstract

Reinforcement learning (RL) has been applied to improve large language model (LLM) reasoning, yet the systematic study of how training scales with task difficulty has been hampered by the lack of controlled, scalable environments. We introduce ScaleLogic, a synthetic logical reasoning framework that offers independent control over two axes of difficulty: the depth of the required proof planning (i.e., the horizon) and the expressiveness of the underlying logic. The framework supports a wide range of logics, from simple implication-only logic ("if-then") to more expressive first-order reasoning with conjunction ("and"), disjunction ("or"), negation ("not"), and universal quantification ("for all"). Using this framework, we show that the RL training compute $T$ follows a power law with respect to reasoning depth $D$ ($T \propto D^{\gamma}$, $R^2 > 0.99$), and that the scaling exponent $\gamma$ increases monotonically with logical expressiveness, from 1.04 to 2.60. On downstream mathematics and general reasoning benchmarks, more expressive training settings yield both larger performance gains (up to +10.66 points) and more compute-efficient transfer compared to less expressive settings, demonstrating that what a model is trained on, not just how much it is trained, shapes downstream transfer. We further show that the power-law relationship holds across multiple RL methods, and that curriculum-based training substantially improves scaling efficiency.
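For intuition, the sketch below shows one standard way to fit the kind of power law the abstract reports, $T \propto D^{\gamma}$: ordinary least squares in log-log space, which recovers the exponent $\gamma$ as the slope. This is not code from the paper; the depth and compute values are hypothetical placeholders chosen only to illustrate the fitting procedure.

```python
# Illustrative sketch only: fitting a power law T = c * D^gamma in log-log space,
# analogous to the compute-vs-depth scaling described in the abstract.
# The depth/compute values below are hypothetical placeholders, not data from the paper.
import numpy as np

# Hypothetical measurements: proof depth D and training compute T needed
# to reach a fixed success threshold at each depth.
depths = np.array([2, 4, 6, 8, 10, 12], dtype=float)
compute = np.array([1.1, 3.9, 8.7, 16.0, 24.8, 36.1], dtype=float)

# A power law T = c * D^gamma is linear in log-log coordinates:
# log T = log c + gamma * log D, so a least-squares line gives gamma directly.
log_d, log_t = np.log(depths), np.log(compute)
gamma, log_c = np.polyfit(log_d, log_t, deg=1)

# Coefficient of determination of the log-log fit, analogous to the
# R^2 > 0.99 reported in the abstract.
pred = gamma * log_d + log_c
ss_res = np.sum((log_t - pred) ** 2)
ss_tot = np.sum((log_t - log_t.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"fitted exponent gamma = {gamma:.2f}, R^2 = {r_squared:.3f}")
```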

Submitted: May 8, 2026
Subjects: Artificial Intelligence (AI)



Source: arXiv:2605.06638v1 (http://arxiv.org/abs/2605.06638v1)
PDF: https://arxiv.org/pdf/2605.06638v1
