---
language:
  - zh
  - en
license: apache-2.0
base_model: Qwen/Qwen3-14B
library_name: transformers
tags:
  - qwen
  - scoring
  - grading
  - evaluation
  - llm-judge
pipeline_tag: text-generation
---

# UNO-Scorer: A Unified General Scoring Model for UNO-Bench


## 📖 Introduction

UNO-Scorer is a lightweight, high-precision LLM-based evaluation model that automates the evaluation of Large Multimodal Models (LMMs) with minimal computational overhead.

**Core Functionality:**

- **Input:** Question + Reference Answer + Model Response
- **Processing:** Compares the model response against the reference answer, sub-question by sub-question
- **Output:** Numerical score + detailed evaluation reasoning for each sub-question

Built upon the powerful Qwen3-14B backbone, UNO-Scorer is fine-tuned on 13K high-quality in-house samples. It overcomes the limitations of traditional Overall Reward Models (ORMs) by supporting 6 distinct question types, and it is particularly strong on Multi-Step Open-Ended Questions (MO).

## 📊 Performance

UNO-Scorer demonstrates superior performance in automated evaluation, particularly in handling complex Multi-Step Open-Ended Questions. We compared the accuracy of our scorer against other advanced evaluators on our test set:

| Model | Accuracy |
| --- | --- |
| Seed-1.5-VL | 0.9118 |
| GPT-4.1 | 0.9457 |
| **UNO-Scorer (Ours)** | **0.9505** |

Experiments show that UNO-Scorer surpasses even proprietary frontier models such as GPT-4.1 in this evaluation domain, at lower cost.

## 💻 Usage

### ⚡ Quick Start (HuggingFace Transformers)

Get started with UNO-Scorer in just a few lines of code:

```bash
pip install -U transformers
python3 test_scorer_hf.py --model-name /path/to/UNO-Scorer
```

**Minimal Example:**

โš ๏ธ Critical: The prompt template below is simplified for illustration. Only the complete prompt template in test_scorer_hf.py will properly activate the model's fine-tuned scoring capabilities. Custom or simplified prompts will not achieve optimal results.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import re

def extract_score(text):
    """Extract the last <score>X</score> tag from the model output."""
    matches = re.findall(r'<score>([\d.]+)</score>', text)
    return float(matches[-1]) if matches else 0.0

tokenizer = AutoTokenizer.from_pretrained("meituan-longcat/UNO-Scorer-Qwen3-14B")
model = AutoModelForCausalLM.from_pretrained(
    "meituan-longcat/UNO-Scorer-Qwen3-14B",
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the scoring prompt (simplified; see test_scorer_hf.py for the full template)
question = "Which animal appears in the image?"
reference = "Sub-question 1: Elephant, total score 10 points"
response = "I see an elephant in the image."

prompt = f"""Please score the model's response based on the reference answer.

Question: {question}
Reference Answer: {reference}
Model Response: {response}

Provide a step-by-step analysis and output the total score in <score></score> tags."""

# Generate the evaluation
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
# Decode only the newly generated tokens, not the echoed prompt
result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

score = extract_score(result)
print(f"Score: {score}/10")
```

### 🔄 How It Works

UNO-Scorer evaluates model responses through a structured process:

1. **Information Organization:** Extracts the question content, reference answer, model response, and scoring criteria
2. **Question Type Classification:** Identifies the question type (multiple-choice, numerical, enumeration, yes/no, short-answer, or essay)
3. **Detailed Comparison:** Compares the model response against the reference answer using type-specific criteria
4. **Score Extraction:** Outputs the final score in `<score>X</score>` format, where X ranges from 0 to 10 (see the parsing sketch below)
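Because the final score always appears in the last `<score>X</score>` tag, extraction can be made a bit more defensive than the `extract_score` helper in the Quick Start. A minimal sketch; the `parse_score` helper below is illustrative, not part of the repository:

```python
import re

def parse_score(text: str, max_score: float = 10.0):
    """Extract the last <score>X</score> tag and sanity-check its range.

    Returns None when no tag is found, so callers can distinguish a
    failed evaluation from a legitimate score of 0.
    """
    matches = re.findall(r"<score>([\d.]+)</score>", text)
    if not matches:
        return None  # no score tag; treat as an evaluation failure
    score = float(matches[-1])
    # Clamp to the expected 0-10 range in case of a malformed output
    return min(max(score, 0.0), max_score)
```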

### 📥 Input Format Requirements

The model expects three key inputs:

| Component | Description | Example |
| --- | --- | --- |
| Question | The original question posed to the model | "Which animals appear in the image?" |
| Reference Answer | Ground-truth answer with point allocation (points sum to 10) | `Sub-question 1: Elephant, total score 10 points` |
| Model Response | The response from the model being evaluated | "I see an elephant in the image." |
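To make the wiring concrete, here is a hypothetical helper that assembles the three components into a scoring prompt. It mirrors the simplified template from the Quick Start; for real evaluations, use the complete template in `test_scorer_hf.py`:

```python
def build_scoring_prompt(question: str, reference: str, response: str) -> str:
    """Assemble the three required inputs into a scoring prompt.

    Simplified for illustration. The full template shipped in
    test_scorer_hf.py is required for optimal scoring quality.
    """
    return (
        "Please score the model's response based on the reference answer.\n\n"
        f"Question: {question}\n"
        f"Reference Answer: {reference}\n"
        f"Model Response: {response}\n\n"
        "Provide a step-by-step analysis and output the total score "
        "in <score></score> tags."
    )
```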

#### Reference Answer Formatting (Critical!)

Since the model is trained primarily on Chinese corpora, formatting reference answers in Chinese yields significantly better results. However, English formatting is also supported.

**For Single-Answer Questions:**

```text
1. {Answer}, total score 10 points, focus only on final answer correctness
1. {答案}，总分10分，无需关注推理过程，最终答案正确即可
```

**For Multi-Part Questions:**

```text
1. {Sub-Answer A} ({X} points); 2. {Sub-Answer B} ({Y} points)
1. {子答案A}（{X}分）; 2. {子答案B}（{Y}分）
```

**With Custom Scoring Criteria:**

```text
1. {Answer}, total score 10 points, scoring criteria: {detailed criteria}
1. {答案}，总分10分，评分标准：{详细标准}
```
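As a convenience, a small hypothetical helper can build multi-part reference answers in the format above while enforcing the 10-point total (the function name and signature are ours, not part of the repository):

```python
def format_reference(sub_answers: list[tuple[str, int]]) -> str:
    """Format sub-answers as '1. A (X points); 2. B (Y points)'.

    The scorer expects the per-part points to sum to 10, so we
    validate that here rather than at evaluation time.
    """
    total = sum(points for _, points in sub_answers)
    if total != 10:
        raise ValueError(f"Point allocation must sum to 10, got {total}")
    return "; ".join(
        f"{i}. {answer} ({points} points)"
        for i, (answer, points) in enumerate(sub_answers, start=1)
    )

# Example:
# format_reference([("Elephant", 6), ("Two", 4)])
# -> '1. Elephant (6 points); 2. Two (4 points)'
```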

### 📤 Output Format

The model returns:

- **Detailed Evaluation:** Step-by-step analysis for each sub-question
- **Score Tag:** `<score>X</score>`, where X ranges from 0 to 10

Example output:

```text
Sub-question 1:

Question Content: How many apples are in the image?
Reference Answer: 2
Model Response: There are two apples.
Points: 10 points
Question Type: Numerical
Comparison Process: The reference answer is "2" and the model response is "two". The numerical values are identical, with only the expression format differing. This meets the scoring standard for numerical questions.

Scoring Explanation: Completely correct, awarded 10 points.

<score>10</score>
```

### 📋 Complete Evaluation Example

See `test_scorer_hf.py` for a full working example with multiple question types:

- Multiple-choice questions
- Yes/No questions
- Open-ended questions
- Multi-part questions

Run the example:

```bash
python3 test_scorer_hf.py --model-name /path/to/UNO-Scorer
```

### 🚀 Optimized Inference with vLLM (Recommended for Production)

For large-scale evaluation tasks, we strongly recommend using vLLM for significant performance improvements:

```bash
# 1. Clone the repository
git clone https://github.com/meituan-longcat/UNO-Bench.git
cd UNO-Bench/uno-eval

# 2. Install dependencies
pip install -r requirements.txt

# 3. Run vLLM-based inference
bash examples/test_scorer_vllm.sh
```
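The script above wraps the repository's own vLLM pipeline. If you prefer to call vLLM directly, a minimal sketch using the offline `LLM` API might look like the following (launch arguments such as tensor parallelism are omitted, and the single inline prompt is just a stand-in for your batch):

```python
import re

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

MODEL = "meituan-longcat/UNO-Scorer-Qwen3-14B"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
llm = LLM(model=MODEL)
# Deterministic decoding is a reasonable default for a scoring model
params = SamplingParams(temperature=0.0, max_tokens=2048)

# One simplified scoring prompt; in practice, batch thousands at once
scoring_prompts = [
    "Please score the model's response based on the reference answer.\n\n"
    "Question: Which animal appears in the image?\n"
    "Reference Answer: Sub-question 1: Elephant, total score 10 points\n"
    "Model Response: I see an elephant in the image.\n\n"
    "Provide a step-by-step analysis and output the total score "
    "in <score></score> tags.",
]

chat_prompts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": p}], tokenize=False, add_generation_prompt=True
    )
    for p in scoring_prompts
]

# vLLM schedules and batches the whole list in a single call
outputs = llm.generate(chat_prompts, params)
for out in outputs:
    matches = re.findall(r"<score>([\d.]+)</score>", out.outputs[0].text)
    print(matches[-1] if matches else "no score found")
```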

**Why vLLM?**

- **10-20x faster inference** than standard HuggingFace Transformers generation
- **Better batching support** for multiple evaluation tasks
- **Lower memory footprint**
- **Optimized for production deployments**

โš ๏ธ Important Notes

  1. Language: Chinese formatting in reference answers produces significantly better results due to the model's training data composition
  2. Point Allocation: Reference answers must have total points equal to 10
  3. Score Extraction: Always look for <score>X</score> in the output
  4. Batch Processing: Use vLLM for evaluating multiple responses efficiently
  5. Question Type Awareness: Ensure reference answers clearly specify the question type for optimal scoring

## 🎯 Supported Question Types

UNO-Scorer supports evaluation across 6 distinct question types:

| Question Type | Description | Scoring Rule |
| --- | --- | --- |
| Multiple-Choice | Select the correct option from given choices | Response must match the correct option exactly |
| Numerical | Provide specific numerical values | No tolerance for numerical errors |
| Enumeration | List all required items | Must include all items, with no omissions or errors |
| Yes/No | Binary judgment questions | Response judgment must match the reference answer |
| Short-Answer | Brief factual answers | Semantic equivalence acceptable; flexible expression allowed |
| Essay | Longer analytical responses | Must contain the core viewpoints of the reference answer |

## 📜 Citation

If you find this model or UNO-Bench useful for your research, please cite our paper:

```bibtex
@misc{chen2025unobench,
      title={UNO-Bench: A Unified Benchmark for Exploring the Compositional Law Between Uni-modal and Omni-modal in Omni Models},
      author={Chen Chen and ZeYang Hu and Fengjiao Chen and Liya Ma and Jiaxing Liu and Xiaoyu Li and Ziwen Wang and Xuezhi Cao and Xunliang Cai},
      year={2025},
      eprint={2510.18915},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.18915},
}
```

โš–๏ธ License & Disclaimer

This model is released under the Apache 2.0 License. It is based on Qwen3-14B. Please strictly follow the license and usage policy of the original Qwen model series.

**Disclaimer:** This model is designed for research and evaluation purposes. Users are responsible for ensuring their use complies with applicable laws and regulations.

๐Ÿค Contributing

We welcome contributions and feedback! Please feel free to:

  • Report issues or bugs
  • Suggest improvements
  • Share your evaluation results
  • Contribute enhancements

For more information, visit our [GitHub repository](https://github.com/meituan-longcat/UNO-Bench).