For full details, see the RLVE paper.

OpenThinker3-1.5B-RLVE

This model is trained on top of OpenThinker3-1.5B using RLVE (reinforcement learning with adaptive verifiable environments). For an overview of RLVE, see the figure below or our paper.

Figure 1

Evaluation Results

We provide evaluation instructions in our repository.

| Benchmark | AIME 2024 (Avg@64) | AIME 2025 (Avg@64) | OMEGA-500 (Avg@4) | OlympiadBench (Avg@4) | BBEH (Avg@4) | LiveCodeBench-v6 (Pass@8) |
|---|---|---|---|---|---|---|
| OpenThinker3-1.5B (starting model) | 54.32 | 42.03 | 25.15 | 56.85 | 4.00 | 28.17 |
| OpenThinker3-1.5B-RLVE | 58.18 | 49.90 | 29.45 | 62.67 | 7.13 | 34.07 |
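For reference, the Pass@8 column is the kind of metric computed with the standard unbiased pass@k estimator (this is a minimal sketch of the common combinatorial formulation, not necessarily the exact script used; see the evaluation instructions in the repository for the authoritative setup):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    drawn without replacement from n generations is correct, given
    that c of the n generations passed the tests."""
    if n - c < k:  # every size-k subset must contain a correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 16 samples per problem, 4 of them correct, estimate pass@8
print(round(pass_at_k(16, 4, 8), 4))  # → 0.9615
```

Avg@N in the other columns is simply mean accuracy over N sampled generations per problem.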

Intended uses & limitations

This model is released under the Apache 2.0 license.

Training

For training details and hyperparameters, please see our repository.

In particular, you can rerun training for this model with this command (after setting up the repository):

bash scripts/training/OpenThinker3-1.5B/rlve/num-environment=400.sh RLVE

Citation

@article{zeng2025rlve,
  title={RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments},
  author={Zeng, Zhiyuan and Ivison, Hamish and Wang, Yiping and Yuan, Lifan and Li, Shuyue Stella and Ye, Zhuorui and Li, Siting and He, Jacqueline and Zhou, Runlong and Chen, Tong and Zhao, Chenyang and Tsvetkov, Yulia and Du, Simon Shaolei and Jaques, Natasha and Peng, Hao and Koh, Pang Wei and Hajishirzi, Hannaneh},
  journal={arXiv preprint arXiv:2511.07317},
  year={2025}
}