OREAL: Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning


✨ Introduction

(Figure: overview of the OREAL framework)

Reasoning abilities, especially those for solving complex math problems, are crucial components of general intelligence. Recent advances by proprietary companies, such as OpenAI's o-series models, have made remarkable progress on reasoning tasks. However, the complete technical details remain undisclosed; the only techniques widely believed to be adopted are reinforcement learning (RL) and long chains of thought.

We propose a new RL framework, termed OREAL, to pursue the performance limit that can be achieved through Outcome REwArd-based reinforcement Learning for mathematical reasoning tasks, where only binary outcome rewards are easily accessible.

  • We theoretically prove that behavior cloning on positive trajectories from best-of-N (BoN) sampling is sufficient to learn the KL-regularized optimal policy in binary feedback environments.
  • This formulation further implies that the rewards of negative samples should be reshaped to ensure the gradient consistency between positive and negative samples.
  • To alleviate the long-standing difficulties caused by sparse rewards in RL, which are further exacerbated by the partial correctness of long chains of thought in reasoning tasks, we apply a token-level reward model to sample important tokens in reasoning trajectories for learning.

The OREAL implementation pseudocode is as follows:

(Figure: OREAL training pseudocode)
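Since the pseudocode figure is not reproduced here, the sketch below is a rough, hypothetical illustration (not code from this repository) of how the three ideas above combine into a per-trajectory loss: behavior cloning on a correct best-of-N sample, a reshaped penalty on incorrect samples for gradient consistency, and per-token weights from a token-level reward model. All names and the exact reshaping are assumptions; the real training logic lives in train_oreal.py and oreal/configs.

import torch

def oreal_style_loss(logp_tokens: torch.Tensor,   # (T,) log-probs of the sampled tokens under the policy
                     token_weights: torch.Tensor, # (T,) importance weights from a token-level reward model
                     is_correct: bool,            # binary outcome reward for the whole trajectory
                     neg_coef: float = 1.0) -> torch.Tensor:  # reshaping coefficient for negative samples
    # Token-level weighting focuses learning on the important tokens of the trajectory.
    weighted_logp = (token_weights * logp_tokens).sum()
    if is_correct:
        # Behavior cloning on the positive (best-of-N) trajectory: maximize its log-likelihood.
        return -weighted_logp
    # Reshaped penalty on the negative trajectory, scaled to keep gradients
    # consistent in magnitude with the positive-sample term.
    return neg_coef * weighted_logp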

📃 Key Results

With OREAL, a 7B model can, for the first time, reach 94.0 pass@1 accuracy on MATH-500 through RL, on par with 32B models. OREAL-32B also surpasses previous 32B models trained by distillation, reaching 95.0 pass@1 accuracy on MATH-500.

(Table: main benchmark results)

🤗 HuggingFace

Model

Our OREAL models are available on Hugging Face 🤗:

| Model | Hugging Face Repo |
| --- | --- |
| OREAL-DeepSeek-R1-Distill-Qwen-7B | Model Link |
| OREAL-7B | Model Link |
| OREAL-32B | Model Link |

We also release the SFT versions of the models, so you can build your own RL pipeline on top of them. :)

| Model | Hugging Face Repo |
| --- | --- |
| OREAL-7B-SFT | Model Link |
| OREAL-32B-SFT | Model Link |
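As a quick-start sketch (not taken from this repository), the released checkpoints should load with the standard transformers API. The repo id internlm/OREAL-7B below is an assumption based on the table above; check the actual Model Link entries before use:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo id; verify against the "Model Link" entries above.
model_id = "internlm/OREAL-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Ask a simple math question and decode only the newly generated tokens.
messages = [{"role": "user", "content": "Solve: if 3x + 5 = 20, what is x?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))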

Data

We release the prompts utilized in our RL training phase.

| Dataset | Hugging Face Repo |
| --- | --- |
| RL Prompts | Dataset Link |
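As an illustrative sketch only: if the prompts are published as a standard Hugging Face dataset, they can be loaded as below. The repo id is a placeholder for the actual Dataset Link above, and the record fields are assumptions:

from datasets import load_dataset

# Placeholder repo id; replace it with the dataset referenced by "Dataset Link" above.
prompts = load_dataset("<rl-prompts-repo-id>", split="train")
print(prompts[0])  # inspect one prompt record to see the available fields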

🚄 Training Tutorial

1. Install Dependencies

OREAL utilizes XTuner as the training engine.

pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
pip install flash-attn --no-build-isolation
pip install -r requirements.txt

2. Prepare Data (Optional)

The training data can be found at HERE. The training script will automatically download the data from Hugging Face.

3. Start LLM Verifier Service

OREAL requires a language model as a verifier, used together with a rule-based verification function (see the source code), to evaluate the correctness of generated solutions. We use Qwen2.5-72B-Instruct as the verifier in our experiments. You can start the verifier service with lmdeploy by running the following command:

lmdeploy serve api_server Qwen/Qwen2.5-72B-Instruct --tp 4 --chat-template qwen --log-level INFO --server-port 10003

Alternatively, you can use any other inference engine, such as SGLang, vLLM, or Ollama, as long as the verifier service is reachable through an OpenAI-compatible API.

Fill in the verifier service address in the config file before training.

judgers_config = dict(
    math_judger=dict(  # math judger related settings
        hosts=["x.x.x.x:xxxx", "x.x.x.x:xxxx"],  # verifier service addresses
        stop_word=stop_word,  # generation stop word (defined elsewhere in the config)
        thinking_finish_words=["<conclude>", "**Final Answer**", "</think>"],  # markers that end the thinking part of a response
        num_processes=8,  # number of judger worker processes
        concurrency_per_proc=(8, 8),  # concurrency settings per judger process
    )
)
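Before launching training, it can help to sanity-check that each configured address answers OpenAI-compatible requests. The snippet below is an illustrative check, not part of the OREAL codebase; it assumes the standard /v1/chat/completions route and the served model name used above:

import requests

# Replace with one of the addresses you put into judgers_config above.
host = "x.x.x.x:xxxx"

resp = requests.post(
    f"http://{host}/v1/chat/completions",
    json={
        "model": "Qwen/Qwen2.5-72B-Instruct",  # model name as registered by the serving engine
        "messages": [{"role": "user", "content": "Reply with the single word: ok"}],
        "max_tokens": 8,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])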

4. Train OREAL

OREAL-7B

Training the 7B model requires 32 GPUs. You can use the following command to train it with OREAL-7B-SFT as the initial policy:

torchrun --nnodes 4 --nproc_per_node 8 --master_addr $MASTER_ADDR --node_rank $RANK --master_port $MASTER_PORT train_oreal.py oreal/configs/oreal_w_tokenrm_OREAL-7B-SFT_seqlen16k.py --total_steps 90 --work_dir ./work_dir/oreal_w_tokenrm_OREAL-7B-SFT_seqlen16k

It takes about 9 hours to train the model for 90 steps on 32 A100 GPUs.

OREAL-32B

Training the 32B model requires 128 GPUs. You can use the following command to train it with OREAL-32B-SFT as the initial policy:

torchrun --nnodes 16 --nproc_per_node 8 --master_addr $MASTER_ADDR --node_rank $RANK --master_port $MASTER_PORT train_oreal.py oreal/configs/oreal_w_tokenrm_OREAL-32B-SFT_seqlen16k.py --total_steps 90 --work_dir ./work_dir/oreal_w_tokenrm_OREAL-32B-SFT_seqlen16k

More detailed training settings can be found in the oreal/configs folder.

Note:

  • The best checkpoint may not be the last one. Consider evaluating during training and stopping early once performance saturates.

🖊️ Citation

@article{lyu2025exploring,
  title={Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning},
  author={Lyu, Chengqi and Gao, Songyang and Gu, Yuzhe and Zhang, Wenwei and Gao, Jianfei and Liu, Kuikun and Wang, Ziyi and Li, Shuaibin and Zhao, Qian and Huang, Haian and others},
  journal={arXiv preprint arXiv:2502.06781},
  year={2025}
}

💳 License

This project is released under the Apache 2.0 license.
