Search and Refine During Think: Autonomous Retrieval-Augmented Reasoning of LLMs

This model was presented in the paper "Search and Refine During Think: Autonomous Retrieval-Augmented Reasoning of LLMs".

Paper abstract

Large language models have demonstrated impressive reasoning capabilities but are inherently limited by their knowledge reservoir. Retrieval-augmented reasoning mitigates this limitation by allowing LLMs to query external resources, but existing methods often retrieve irrelevant or noisy information, hindering accurate reasoning. In this paper, we propose AutoRefine, a reinforcement learning post-training framework that adopts a new "search-and-refine-during-think" paradigm. AutoRefine introduces explicit knowledge refinement steps between successive search calls, enabling the model to iteratively filter, distill, and organize evidence before generating an answer. Furthermore, we incorporate tailored retrieval-specific rewards alongside answer correctness rewards using group relative policy optimization. Experiments on single-hop and multi-hop QA benchmarks demonstrate that AutoRefine significantly outperforms existing approaches, particularly in complex, multi-hop reasoning scenarios. Detailed analysis shows that AutoRefine issues frequent, higher-quality searches and synthesizes evidence effectively.
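To make the search-and-refine-during-think loop concrete, here is a minimal Python sketch of one rollout. The control tags (<search>, <documents>, <answer>), the retriever interface, and the search budget are illustrative assumptions based on the abstract, not the exact markup used by AutoRefine.

```python
import re
from typing import Callable, List

# Hypothetical control tags; the exact markup is defined by the paper's
# training setup, not by this model card.
SEARCH_RE = re.compile(r"<search>(.*?)</search>", re.DOTALL)
ANSWER_RE = re.compile(r"<answer>(.*?)</answer>", re.DOTALL)

def search_and_refine_rollout(
    question: str,
    generate: Callable[[str], str],        # one decoding call to the policy LLM
    retrieve: Callable[[str], List[str]],  # external retriever (e.g. BM25 or dense)
    max_searches: int = 4,
) -> str:
    """Roll out one search-and-refine-during-think trajectory.

    The model reasons in text; whenever it emits a <search> query, retrieved
    documents are appended to the trace so the next generation step can filter
    and distill them (the explicit refinement step) before searching again or
    committing to an answer.
    """
    trace = question
    for _ in range(max_searches):
        step = generate(trace)
        trace += step
        done = ANSWER_RE.search(step)
        if done:
            return done.group(1).strip()
        queries = SEARCH_RE.findall(step)
        if not queries:
            break  # the model stopped without searching or answering
        docs = retrieve(queries[-1].strip())
        trace += "\n<documents>\n" + "\n".join(docs) + "\n</documents>\n"
    return ""
```

The training signal described in the abstract combines answer correctness with retrieval-specific rewards under group relative policy optimization (GRPO). The sketch below assumes a substring match for correctness, a hypothetical <refine> tag marking the distilled evidence, and an arbitrary bonus weight; the actual reward shaping is specified in the paper.

```python
import re
import statistics
from typing import List

ANSWER_RE = re.compile(r"<answer>(.*?)</answer>", re.DOTALL)
REFINE_RE = re.compile(r"<refine>(.*?)</refine>", re.DOTALL)

def reward(trajectory: str, gold: str) -> float:
    # Answer-correctness reward: does the final answer contain the gold string?
    ans = ANSWER_RE.search(trajectory)
    correct = 1.0 if ans and gold.lower() in ans.group(1).lower() else 0.0
    # Retrieval-specific reward (assumed form): did any refinement step
    # distill evidence that already contains the gold answer?
    refined = REFINE_RE.findall(trajectory)
    bonus = 0.5 if any(gold.lower() in r.lower() for r in refined) else 0.0
    return correct + bonus

def grpo_advantages(rewards: List[float]) -> List[float]:
    # GRPO's group-relative baseline: each sampled trajectory's reward is
    # normalized by the mean and std of its group of rollouts.
    mu = statistics.mean(rewards)
    sd = statistics.pstdev(rewards) or 1.0
    return [(r - mu) / sd for r in rewards]
```

GRPO needs no learned value model: each trajectory's advantage is simply its reward normalized against the other rollouts sampled for the same question.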

Code

The code for this project is available on GitHub: https://github.com/volcengine/verl (note: this links to the base RL framework mentioned in the acknowledgements; a model-specific repository link should be added here if one becomes available).
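For reference, a standard way to load the released checkpoint with the Hugging Face transformers library is sketched below. The prompt template with reasoning and search tags is defined by the training setup in the paper; the plain question here is only a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yrshi/AutoRefine-Qwen2.5-3B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Question: Who wrote The Brothers Karamazov?\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```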

Model details

Format: Safetensors
Model size: 3B params
Tensor type: F32

Model tree for yrshi/AutoRefine-Qwen2.5-3B-Base

Quantizations: 1 quantized model is available on the Hub. This model is also included in a Hub collection.