Clarification on osunlp/UGround-V1-2B training origin

#2
by dqdw - opened

Dear [Developer/Team],

I've been using osunlp/UGround-V1-2B recently, and I find it quite useful for my tasks.

Before building on top of it, I would like to understand its connection with Qwen/Qwen2-VL-2B:

Direct Fine-tuning: Is it fine-tuned directly from Qwen/Qwen2-VL-2B, or via intermediate checkpoints?

Inheritance: Does it keep the same architecture and weight layout as Qwen/Qwen2-VL-2B?

Understanding this will help me ensure compatibility with the Qwen/Qwen2-VL-2B ecosystem.

Thank you for your time and support!

OSU NLP Group org

It's the same architecture as Qwen/Qwen2-VL-2B; no architectural changes were made.
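Since the architecture is unchanged, the two models' `config.json` files should declare the same `model_type` and architecture class, and UGround-V1-2B should load with the standard Qwen2-VL classes in `transformers`. Below is a minimal sketch of such a compatibility check; the config values shown are illustrative stand-ins rather than fetched from the Hub, and the `from_pretrained` call is commented out because it downloads the full weights.

```python
def same_architecture(cfg_a: dict, cfg_b: dict) -> bool:
    """Return True if two Hugging Face config dicts declare the same
    model_type and architectures list (i.e., are drop-in compatible
    at the architecture level)."""
    return (
        cfg_a.get("model_type") == cfg_b.get("model_type")
        and cfg_a.get("architectures") == cfg_b.get("architectures")
    )

# Illustrative config fields; in practice these would come from each
# model's config.json on the Hub.
qwen_cfg = {
    "model_type": "qwen2_vl",
    "architectures": ["Qwen2VLForConditionalGeneration"],
}
uground_cfg = {
    "model_type": "qwen2_vl",
    "architectures": ["Qwen2VLForConditionalGeneration"],
}

print(same_architecture(qwen_cfg, uground_cfg))  # True

# If the architectures match, loading with the Qwen2-VL classes should work
# (requires transformers with Qwen2-VL support; downloads ~2B weights):
# from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
# model = Qwen2VLForConditionalGeneration.from_pretrained("osunlp/UGround-V1-2B")
# processor = AutoProcessor.from_pretrained("osunlp/UGround-V1-2B")
```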
