mLLMs_merging_4_DMO
Collection of official checkpoints from the paper "Linear Model Merging Unlocks Simple and Scalable Multimodal Data Mixture Optimization".
This is an official checkpoint from the paper "Linear Model Merging Unlocks Simple and Scalable Multimodal Data Mixture Optimization" (link). See the official implementation for more information on how to use the models.
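As the title suggests, the paper's core idea is that checkpoints fine-tuned on individual data sources can be merged linearly to emulate training on a weighted data mixture. The sketch below shows plain weighted parameter averaging under that assumption; the function name, checkpoint paths, and mixture weights are illustrative only and do not reproduce the paper's official implementation (see the official repo for that).

```python
# Illustrative sketch of linear model merging for data-mixture search.
# Checkpoint paths and weights below are placeholders, not official assets.
from typing import Dict, List
import torch


def linear_merge(state_dicts: List[Dict[str, torch.Tensor]],
                 weights: List[float]) -> Dict[str, torch.Tensor]:
    """Weighted average of parameter tensors from per-data-source checkpoints."""
    assert abs(sum(weights) - 1.0) < 1e-6, "mixture weights should sum to 1"
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged


# Example: emulate a 60/40 OCR/chart data mixture by merging the two
# corresponding fine-tuned checkpoints (paths are hypothetical).
ocr_sd = torch.load("internvl3_5_2b_ocr/pytorch_model.bin", map_location="cpu")
chart_sd = torch.load("internvl3_5_2b_chart/pytorch_model.bin", map_location="cpu")
merged_sd = linear_merge([ocr_sd, chart_sd], weights=[0.6, 0.4])
```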
This model is a fine-tuned version of OpenGVLab/InternVL3_5-2B-Pretrained-HF on a custom dataset with OCR data (~100k samples).
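A minimal usage sketch, assuming this checkpoint loads like other HF-format InternVL3.5 models through the generic transformers image-text-to-text interface; the repo id, image URL, and prompt are placeholders, and the official implementation remains the authoritative reference.

```python
# Minimal sketch: load a checkpoint from this collection and run OCR-style inference.
# The repo id is a placeholder -- substitute the actual checkpoint name.
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

repo_id = "OpenGVLab/InternVL3_5-2B-Pretrained-HF"  # placeholder repo id

processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForImageTextToText.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/receipt.png"},  # placeholder image
        {"type": "text", "text": "Transcribe the text in this image."},
    ]}
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```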
It achieves the following results on the evaluation set:
- Loss: 0.7255 (final validation loss at epoch 1.0; see the training results table below)
The following training and validation losses were recorded during fine-tuning:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.7653 | 0.125 | 100 | 0.7833 |
| 0.7638 | 0.25 | 200 | 0.7549 |
| 0.7181 | 0.375 | 300 | 0.7425 |
| 0.6936 | 0.5 | 400 | 0.7350 |
| 0.7319 | 0.625 | 500 | 0.7295 |
| 0.7119 | 0.75 | 600 | 0.7274 |
| 0.7076 | 0.875 | 700 | 0.7257 |
| 0.7244 | 1.0 | 800 | 0.7255 |
Base model: OpenGVLab/InternVL3_5-2B-Pretrained