Clarification on osunlp/UGround-V1-2B training origin
#2
by
dqdw - opened
Dear [Developer/Team],
I've been using osunlp/UGround-V1-2B recently, and I find it quite useful for my tasks.
Before building on top of it, I would like to understand its connection with Qwen/Qwen2-VL-2B:
Direct fine-tuning: Is it fine-tuned directly from Qwen/Qwen2-VL-2B, or via intermediate checkpoints?
Inheritance: Does it keep the same architecture and weights as Qwen/Qwen2-VL-2B?
Understanding this will help me ensure compatibility with the Qwen/Qwen2-VL-2B ecosystem.
Thank you for your time and support!
Hi, you can find more details here: https://github.com/OSU-NLP-Group/UGround/blob/3b81756699c089d2555530c99f317fad37e7c396/train/qwen2_vl/finetune_uground_v1_2b.yaml#L58
It's the same architecture; nothing was changed.