RLVE
Models for "RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments" - https://arxiv.org/abs/2511.07317
For full information, check out the RLVE paper.
This model is trained on top of OpenThinker3-1.5B using RLVE. For an overview of RLVE, see the figure below or our paper.
We provide evaluation instructions in our repository.
| Model | AIME 2024 (Avg@64) | AIME 2025 (Avg@64) | OMEGA-500 (Avg@4) | OlympiadBench (Avg@4) | BBEH (Avg@4) | LiveCodeBench-v6 (Pass@8) |
|---|---|---|---|---|---|---|
| OpenThinker3-1.5B (starting model) | 54.32 | 42.03 | 25.15 | 56.85 | 4.00 | 28.17 |
| OpenThinker3-1.5B-RLVE | 58.18 | 49.90 | 29.45 | 62.67 | 7.13 | 34.07 |
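The table reports scores under two standard sampling metrics, Avg@k and Pass@k. As a rough sketch (the function names here are ours, not from the RLVE repository; see the repository's evaluation instructions for the exact protocol), Avg@k is the mean accuracy over k sampled completions, and Pass@k is the usual unbiased estimator of the probability that at least one of k samples is correct:

```python
from math import comb

def avg_at_k(correct_flags):
    """Avg@k: mean accuracy (in %) over k sampled completions, e.g. Avg@64."""
    return 100.0 * sum(correct_flags) / len(correct_flags)

def pass_at_k(n, c, k):
    """Pass@k (in %): unbiased estimate of the chance that at least one of k
    samples is correct, given c correct samples out of n drawn (Chen et al., 2021)."""
    if n - c < k:
        # Every size-k subset must contain at least one correct sample.
        return 100.0
    return 100.0 * (1.0 - comb(n - c, k) / comb(n, k))
```

For example, `avg_at_k([1, 0, 1, 0])` gives 50.0, and `pass_at_k(4, 1, 2)` gives 50.0, since only 3 of the 6 possible sample pairs miss the single correct completion.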
Apache 2.0 License
For training details and hyperparameters, please see our repository.
In particular, you can rerun training for this model with this command (after setting up the repository):
```bash
bash scripts/training/OpenThinker3-1.5B/rlve/num-environment=400.sh RLVE
```
```bibtex
@article{zeng2025rlve,
  title={RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments},
  author={Zeng, Zhiyuan and Ivison, Hamish and Wang, Yiping and Yuan, Lifan and Li, Shuyue Stella and Ye, Zhuorui and Li, Siting and He, Jacqueline and Zhou, Runlong and Chen, Tong and Zhao, Chenyang and Tsvetkov, Yulia and Du, Simon Shaolei and Jaques, Natasha and Peng, Hao and Koh, Pang Wei and Hajishirzi, Hannaneh},
  journal={arXiv preprint arXiv:2511.07317},
  year={2025}
}
```
Base model: Qwen/Qwen2.5-1.5B