AutoGEO_mini_Qwen1.7B_ResearchyGEO

A lightweight web-document rewriting model fine-tuned from Qwen3-1.7B with GRPO (a reinforcement learning method), developed as part of the AutoGEO framework introduced in:

What Generative Search Engines Like and How to Optimize Web Content Cooperatively
Paper (arXiv): https://arxiv.org/abs/2510.11438


What this model does

AutoGEO_mini_Qwen1.7B_ResearchyGEO rewrites raw web documents into improved versions that are better aligned with generative search engines' preferences, trained on the Researchy-GEO dataset.

In our experiments:

  • The total cost is about 0.0071× that of gemini-2.5-pro for comparable rewriting workloads.
  • Rewritten documents achieve significant improvements in GEO metrics.

Training summary

  • Base model: Qwen3-1.7B
  • Method: GRPO-based reinforcement learning fine-tuning
  • Task: Rewrite original web documents to improve GEO metrics (per the AutoGEO framework in the paper above)
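For readers curious how such a setup looks in code, below is a minimal sketch of GRPO fine-tuning using trl's GRPOTrainer. The reward function here is a deliberately simple placeholder; the actual AutoGEO reward is computed from GEO metrics as described in the paper, and the dataset wiring is an assumption, not the released training recipe.

```python
def toy_geo_reward(prompts: list[str], completions: list[str], **kwargs) -> list[float]:
    """Placeholder reward illustrating the reward-function interface expected
    by trl's GRPOTrainer. It merely favors non-empty rewrites that do not
    balloon far beyond the source length; the real AutoGEO reward is based on
    GEO metrics and is NOT reproduced here.
    """
    rewards = []
    for prompt, completion in zip(prompts, completions):
        if not completion.strip():
            rewards.append(0.0)  # empty rewrite: no reward
        elif len(completion) / max(len(prompt), 1) <= 2.0:
            rewards.append(1.0)  # reasonable length ratio
        else:
            rewards.append(0.5)  # overly long rewrite
    return rewards


def train_sketch(train_dataset):
    """Sketch of the trainer wiring; requires `pip install trl` and a dataset
    with a "prompt" column of documents to rewrite. Not executed here."""
    from trl import GRPOConfig, GRPOTrainer

    trainer = GRPOTrainer(
        model="Qwen/Qwen3-1.7B",
        reward_funcs=toy_geo_reward,
        args=GRPOConfig(output_dir="autogeo-grpo", num_generations=8),
        train_dataset=train_dataset,
    )
    trainer.train()
```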

Repository contents

This repository includes the standard inference artifacts (e.g., model.safetensors, config.json, tokenizer.json, chat_template.jinja, etc.) required to load and run the model with transformers.
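A minimal usage sketch with transformers is shown below. The model ships a chat template, so generation goes through `apply_chat_template`; note that the instruction wording in the prompt helper is an illustrative assumption, not the prompt used during AutoGEO training.

```python
def build_rewrite_messages(document: str) -> list[dict]:
    """Wrap a raw web document in a chat-style message list.

    The instruction text is an illustrative assumption, not the official
    AutoGEO training prompt.
    """
    return [
        {
            "role": "user",
            "content": (
                "Rewrite the following web document to better align it with "
                "generative search engines' preferences:\n\n" + document
            ),
        }
    ]


def rewrite_document(
    document: str,
    model_id: str = "cx-cmu/AutoGEO_mini_Qwen1.7B_ResearchyGEO",
) -> str:
    """Load the model with transformers and generate a rewritten document.

    Imports are kept inside the function so the prompt helper above can be
    used without transformers/torch installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

    input_ids = tokenizer.apply_chat_template(
        build_rewrite_messages(document),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    output_ids = model.generate(input_ids, max_new_tokens=1024)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```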


Model details

  • Format: Safetensors
  • Model size: 2B params
  • Tensor type: BF16

Model tree for cx-cmu/AutoGEO_mini_Qwen1.7B_ResearchyGEO

  • Base model: Qwen/Qwen3-1.7B
  • Quantizations: 2 models
