
Key Takeaways

  • ScaleLogic is a synthetic logical-reasoning framework that independently controls proof depth (the horizon) and logical expressiveness.
  • RL training compute to reach a target accuracy follows a power law in reasoning depth, with the exponent rising from 1.04 for implication-only logic to 2.60 for first-order logic.
  • Training on more expressive logic transfers better to mathematics and general reasoning benchmarks, with gains of up to 10.66 points.
  • The power-law relationship holds across multiple RL methods, and curriculum-based training substantially improves scaling efficiency.
Paper Abstract

Reinforcement learning (RL) has been applied to improve large language model (LLM) reasoning, yet the systematic study of how training scales with task difficulty has been hampered by the lack of controlled, scalable environments. We introduce ScaleLogic, a synthetic logical reasoning framework that offers independent control over two axes of difficulty: the depth of the required proof planning (i.e., the horizon) and the expressiveness of the underlying logic. Our proposed framework supports a wide range of logics: from simple implication-only logic ("if-then") towards more expressive first-order reasoning with conjunction ("and"), disjunction ("or"), negation ("not"), and universal quantification ("for all"). Using this framework, we show that the RL training compute $T$ follows a power law with respect to reasoning depth $D$ ($T \propto D^{\gamma}$, $R^{2} > 0.99$), and that the scaling exponent $\gamma$ increases monotonically with logical expressiveness, from $1.04$ to $2.60$. On downstream mathematics and general reasoning benchmarks, more expressive training settings yield both larger performance gains (up to $+10.66$ points) and more compute-efficient transfer compared to less expressive settings, demonstrating that what a model is trained on, not just how much it is trained, shapes downstream transfer. We further show that the power-law relationship holds across multiple RL methods, and curriculum-based training substantially improves scaling efficiency.

Can RL Teach Long-Horizon Reasoning to LLMs? Expressiveness Is Key
Researchers are investigating why large language models (LLMs) often struggle with complex, multi-step reasoning. While reinforcement learning (RL) has improved model performance in areas like math and coding, these gains often disappear when tasks require long sequences of logical steps. This paper introduces "ScaleLogic," a synthetic framework designed to systematically measure how training difficulty—specifically the depth of reasoning and the complexity of the logic involved—affects an LLM's ability to learn. By controlling these variables, the authors provide a clearer picture of how to scale RL training effectively.

A Controlled Environment for Reasoning

Current methods for training LLMs often lack the ability to precisely control the difficulty of reasoning tasks. ScaleLogic addresses this by generating synthetic problems that require the model to identify a correct conclusion from a set of facts. The framework allows researchers to independently adjust two key factors: the "horizon" (the depth of the proof tree required to reach a conclusion) and the "logical expressiveness" (the complexity of the rules, such as adding "and," "or," "not," or "for all" statements). Because these problems are automatically generated and verifiable, they provide a scalable and low-cost way to test how models handle increasingly difficult logical chains.
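
To make the setup concrete, here is a minimal Python sketch of how such a generator might look for the simplest, implication-only setting. The function name, atom vocabulary, and absence of distractor rules are illustrative assumptions, not details from the paper; the actual ScaleLogic generator additionally supports conjunction, disjunction, negation, and universal quantification.

    import random

    def generate_implication_chain(depth, vocab_size=100):
        """Build a toy implication-only problem whose proof needs `depth` steps.

        Returns (rules, given_fact, question). Hypothetical sketch; the real
        framework also adds more expressive connectives and distractors.
        """
        # Sample depth+1 distinct atoms forming a chain p0 -> p1 -> ... -> p_depth.
        atoms = random.sample([f"p{i}" for i in range(vocab_size)], depth + 1)
        rules = [f"If {a} then {b}." for a, b in zip(atoms, atoms[1:])]
        random.shuffle(rules)  # hide the proof order in the rule presentation
        return rules, f"{atoms[0]} is true.", f"Is {atoms[-1]} true?"

    rules, fact, question = generate_implication_chain(depth=4)
    print(fact, question)
    print("\n".join(rules))

Because the chain is constructed explicitly, the correct answer is known by construction, which is what makes the reward verifiable at scale.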

The Power Law of Training Compute

The study reveals a consistent relationship between the effort required to train a model and the difficulty of the tasks it must solve. The authors found that the amount of training compute needed to reach a specific accuracy threshold follows a "power law" relative to the depth of the reasoning required. Crucially, the "scaling exponent"—a measure of how much harder the training becomes as tasks get deeper—increases significantly as the logic becomes more expressive. For simple "if-then" logic, the exponent is 1.04, but it rises to 2.60 for more complex first-order logic. This demonstrates that as logical complexity increases, the training effort required to master deeper reasoning grows disproportionately.
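
The exponent can be read off a log-log fit, since T = c · D^γ implies log T = log c + γ · log D. The sketch below illustrates that fit on made-up numbers; the depth and compute values are hypothetical, not the paper's measurements.

    import numpy as np

    # Hypothetical (depth, compute-to-threshold) pairs, not the paper's data;
    # the values are chosen so the fit lands near the implication-only regime.
    depths = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
    compute = np.array([1.0, 2.1, 4.3, 8.8, 18.0])  # arbitrary units

    # T = c * D^gamma is linear in log-log space: log T = log c + gamma * log D.
    gamma, log_c = np.polyfit(np.log(depths), np.log(compute), deg=1)

    # Goodness of fit in log space, matching how power-law fits are judged.
    pred = log_c + gamma * np.log(depths)
    resid = np.log(compute) - pred
    r2 = 1.0 - resid.var() / np.log(compute).var()
    print(f"gamma = {gamma:.2f}, R^2 = {r2:.4f}")  # gamma comes out near 1.04

On the paper's actual runs this kind of fit is reported with R² > 0.99, with γ growing from 1.04 to 2.60 as the logic becomes more expressive.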

Impact on Real-World Performance

Beyond synthetic tasks, the researchers tested whether this training translates to real-world benchmarks in mathematics and general reasoning. They discovered that training on highly expressive synthetic data leads to better performance on these external benchmarks, with improvements of up to 10.66 percentage points. The results suggest that the "quality" of the training data—specifically its logical expressiveness—is just as important as the quantity. Models trained on more expressive logic showed more efficient and robust transfer to real-world problems, whereas models trained on simpler logic tended to plateau early, failing to gain the same level of reasoning capability.

Key Takeaways for Future Training

The findings suggest that the way we structure training data fundamentally shapes how well an LLM learns to reason. The authors also noted that using a "curriculum"—a structured approach to training that introduces difficulty gradually—can significantly improve the efficiency and stability of the learning process. By providing a framework that isolates these variables, the paper offers a roadmap for developers to move beyond simple data scaling and toward more targeted, compute-efficient strategies for teaching LLMs how to handle long-horizon, complex reasoning.
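
One simple way to realize such a curriculum is to gate depth increases on accuracy at the current depth. The sketch below is an illustrative schedule under that assumption; the train_step and eval_accuracy callbacks are hypothetical stand-ins for the RL update and evaluation loop, and the paper's exact schedule may differ.

    def run_curriculum(train_step, eval_accuracy, max_depth=8,
                       threshold=0.8, steps_per_stage=100):
        """Accuracy-gated depth curriculum (an illustrative schedule; the
        paper's exact curriculum may differ). Train at the current depth
        and only advance once accuracy clears the threshold."""
        depth = 1
        while depth <= max_depth:  # (a real loop would also cap total steps)
            for _ in range(steps_per_stage):
                train_step(depth)  # one RL update on problems of this depth
            if eval_accuracy(depth) >= threshold:
                depth += 1  # graduate to the next horizon
        return depth

    # Stub callbacks for a runnable demo (both hypothetical).
    progress = {"skill": 0.0}

    def train_step(depth):
        progress["skill"] += 0.01  # pretend each update adds a little skill

    def eval_accuracy(depth):
        return min(1.0, progress["skill"] / depth)  # deeper needs more skill

    print("reached depth", run_curriculum(train_step, eval_accuracy, max_depth=4))

Gating on accuracy rather than on a fixed step count keeps the model from being flooded with problems it cannot yet solve, which is consistent with the efficiency and stability gains the authors report for curriculum-based training.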
