Qwen3-VL-2B-Instruct fine-tuned on subsets of the FineVision dataset translated into Bulgarian - https://huggingface.co/datasets/petkopetkov/FineVision-bg

Initial results on translated versions of MMMU_val, MMStar and MME:

| Benchmark | Metric | Base (Qwen3-VL-2B-Instruct) | Finetuned | Δ (Finetuned − Base) | Δ % |
|---|---|---|---|---|---|
| MMMU_val_bg | score | 0.29 | 0.2911 | +0.0011 | +0.38% |
| MMStar_bg | average | 0.0215 | 0.0074 | -0.0141 | -65.58% |
| MMStar_bg | coarse perception | 0.0924 | 0.0000 | -0.0924 | -100.00% |
| MMStar_bg | fine-grained perception | 0.0112 | 0.0140 | +0.0028 | +25.00% |
| MMStar_bg | instance reasoning | 0.0150 | 0.0067 | -0.0083 | -55.33% |
| MMStar_bg | logical reasoning | 0.0066 | 0.0030 | -0.0036 | -54.55% |
| MMStar_bg | math | 0.0039 | 0.0000 | -0.0039 | -100.00% |
| MMStar_bg | science & technology | 0.0000 | 0.0206 | +0.0206 | — |
| MME_bg | mme_cognition_score | 411.4286 | 421.0714 | +9.6428 | +2.34% |
| MME_bg | mme_perception_score | 1310.1282 | 1307.4054 | -2.7228 | -0.21% |
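The Δ and Δ % columns above follow directly from the base and finetuned scores; a minimal sketch of the computation (values copied from the table, helper name `delta` is illustrative):

```python
def delta(base: float, finetuned: float):
    """Return (absolute delta, percentage delta relative to the base score)."""
    d = finetuned - base
    # Percentage delta is undefined when the base score is 0 (the
    # "science & technology" row), hence the dash in the table.
    pct = 100.0 * d / base if base != 0 else float("nan")
    return d, pct

# MMStar_bg average row:
d, pct = delta(0.0215, 0.0074)
print(round(d, 4), round(pct, 2))  # → -0.0141 -65.58
```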

Training and evaluation code is available at https://github.com/petkokp/llm_notebooks/tree/main/bulgarian_llms

Uploaded model

  • Developed by: petkopetkov
  • License: apache-2.0
  • Finetuned from model: unsloth/Qwen3-VL-2B-Instruct

This qwen3_vl model was trained 2x faster with Unsloth and Hugging Face's TRL library.
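A minimal inference sketch, assuming the standard transformers image-text-to-text API (`AutoProcessor` / `AutoModelForImageTextToText`); the image URL and the Bulgarian prompt ("Опиши изображението." - "Describe the image.") are placeholders:

```python
def build_messages(image_url: str, question: str):
    """Assemble a Qwen-style chat message with one image and one text turn."""
    return [{
        "role": "user",
        "content": [
            {"type": "image", "image": image_url},
            {"type": "text", "text": question},
        ],
    }]

if __name__ == "__main__":
    # Heavy dependencies are imported here so the helper above stays
    # usable without transformers/torch installed.
    from transformers import AutoProcessor, AutoModelForImageTextToText

    model_id = "petkopetkov/Qwen3-VL-2B-Instruct-bg"
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype="bfloat16")

    messages = build_messages("https://example.com/cat.jpg", "Опиши изображението.")
    inputs = processor.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=True,
        return_dict=True, return_tensors="pt",
    )
    out = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens.
    print(processor.batch_decode(
        out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0])
```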

