Model Card for HiTZ/Latxa-Qwen3-VL-2B-Instruct

Latxa-Qwen3-VL-2B-Instruct is a Basque-adapted multimodal and multilingual instruct model built on top of Qwen3-VL-2B-Instruct, a vision-language model capable of understanding images and generating text. It has been adapted by the HiTZ Research Center for improved performance on Basque (mono_eu variant), as well as Galician and Catalan (multi variant), and for interactive instruction following.

DISCLAIMER

These models are still under development. The released models are preliminary and might be updated and improved in the future.

The released model contains several versions (revisions):

  • Multilingual (multi): in addition to Basque, the model has also been adapted to Galician and Catalan.
  • Basque monolingual (mono_eu): the Basque monolingual variant.

You can choose the model version by specifying the revision when loading the model, e.g. revision="multi". By default (main), the multilingual variant is downloaded.
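For example, to load the Basque monolingual variant, pass the corresponding revision (a minimal sketch; the revision names are those listed above):

from transformers import pipeline

# revision="mono_eu" selects the Basque monolingual variant;
# omit it (or use revision="multi") for the multilingual one
pipe = pipeline("image-text-to-text", model="HiTZ/Latxa-Qwen3-VL-2B-Instruct", revision="mono_eu")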

Model Details

Model Description

Latxa Vision models are a family of Vision-Language Models based on Qwen3-VL. The models were adapted to different languages following the adaptation method of Sainz et al. (2025). They are released in two language variants: multi (adapted to Basque, Galician and Catalan) and mono_eu (adapted to Basque only).

  • Developed by: HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
  • Funded by: Ikergaitu and ALIA projects (Basque and Spanish Governments)
  • Model type: Vision-Language Instruct Model
  • Language(s) (NLP): Basque, Galician, Catalan, Spanish, English and more.
  • License: Apache 2.0
  • Finetuned from model: Qwen3-VL-2B-Instruct

Getting Started

Use the code below to get started with the model.

from transformers import pipeline

# Load the image-text-to-text pipeline (multilingual variant)
pipe = pipeline("image-text-to-text", model="HiTZ/Latxa-Qwen3-VL-2B-Instruct", revision='multi')

# Messages can mix several content types (e.g. text and images)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.png"},
            {"type": "text", "text": "What do we see in this image?"},
        ]
    }
]
output = pipe(messages)
print(output)
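If you need finer control over generation, you can also load the processor and model explicitly. The following is a minimal sketch, assuming a transformers version recent enough to support Qwen3-VL via the AutoProcessor and AutoModelForImageTextToText classes; the generation parameters are illustrative only:

from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "HiTZ/Latxa-Qwen3-VL-2B-Instruct"
processor = AutoProcessor.from_pretrained(model_id, revision="multi")
model = AutoModelForImageTextToText.from_pretrained(
    model_id, revision="multi", torch_dtype="auto", device_map="auto"
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.png"},
            {"type": "text", "text": "What do we see in this image?"},
        ],
    }
]

# Build model inputs from the chat template (the processor fetches and preprocesses the image)
inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
# Drop the prompt tokens before decoding the answer
answer = processor.batch_decode(generated[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
print(answer)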

Uses

Latxa models are intended to be used with Basque data; performance for any other language is not guaranteed. The multi variant was additionally adapted to Galician and Catalan.

Direct Use

Latxa Instruct models are trained to follow instructions or to work as chat assistants.

Out-of-Scope Use

The model is not intended for malicious activities, such as harming others or violating human rights. Any downstream application must comply with current laws and regulations. Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.

Bias, Risks, and Limitations

In an effort to alleviate potentially disturbing or harmful content, Latxa has been trained on carefully selected and processed data, which comes mainly from local media, national/regional newspapers, encyclopedias and blogs (see Latxa Corpus v1.1). Still, the model is based on the Qwen3-VL models and may carry the same biases, risks and limitations.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information is needed to provide further recommendations.

Training Details

Training Data

For training details, please, refer to our paper: Instructing Large Language Models for Low-Resource Languages: A Systematic Study for Basque

Evaluation

We evaluated the models in a 5-shot setting on multiple-choice and generative tasks. Improvements over the corresponding Qwen3-VL baseline are shown in parentheses.

| Task | Qwen3-VL 2B | 2B mono_eu | 2B multi | Qwen3-VL 4B | 4B mono_eu | 4B multi |
|---|---|---|---|---|---|---|
| arc_eu_challenge_mc | 36.95 | 51.28 (+14.33) | 55.20 (+18.25) | 53.75 | 75.09 (+21.34) | 75.34 (+21.59) |
| arc_eu_easy_mc | 43.27 | 65.99 (+22.72) | 69.95 (+26.68) | 66.20 | 87.58 (+21.38) | 87.58 (+21.38) |
| belebele_eus_Latn | 46.00 | 65.44 (+19.44) | 60.67 (+14.67) | 69.67 | 80.67 (+11.00) | 79.00 (+9.33) |
| bertaqa_eu_global | 46.03 | 53.43 (+7.40) | 56.81 (+10.78) | 60.66 | 69.06 (+8.40) | 69.65 (+8.99) |
| bertaqa_eu_local | 37.27 | 42.51 (+5.24) | 44.46 (+7.19) | 40.27 | 53.43 (+13.16) | 54.36 (+14.09) |
| bl2mp | 49.11 | 87.94 (+38.83) | 89.22 (+40.11) | 55.89 | 90.17 (+34.28) | 90.28 (+34.39) |
| eus_exams_eu | 33.81 | 42.44 (+8.63) | 42.81 (+9.00) | 47.21 | 55.39 (+8.18) | 56.40 (+9.19) |
| eus_proficiency | 25.69 | 36.45 (+10.76) | 36.58 (+10.89) | 28.98 | 51.00 (+22.02) | 51.77 (+22.79) |
| eus_trivia | 35.04 | 40.41 (+5.37) | 42.04 (+7.00) | 44.49 | 56.27 (+11.78) | 57.55 (+13.06) |
| mgsm_native_cot_eu | 13.10 | 33.20 (+20.10) | 34.00 (+20.90) | 39.20 | 58.40 (+19.20) | 62.40 (+23.20) |
| mmlu_eu | 34.07 | 43.33 (+9.26) | 45.93 (+11.86) | 51.48 | 55.19 (+3.71) | 57.41 (+5.93) |
| piqa_eu_mc | 53.70 | 55.17 (+1.47) | 54.08 (+0.38) | 56.81 | 64.49 (+7.68) | 68.68 (+11.87) |
| siqa_eu_mc | 38.18 | 48.26 (+10.08) | 50.31 (+12.13) | 47.54 | 61.67 (+14.13) | 62.59 (+15.05) |
| xstorycloze_eu | 50.50 | 56.98 (+6.48) | 57.05 (+6.55) | 50.63 | 61.22 (+10.59) | 61.81 (+11.18) |
| AVG EU | 38.77 | 51.63 (+12.86) | 52.79 (+14.02) | 50.91 | 65.69 (+14.78) | 66.77 (+15.86) |

DISCLAIMER

These models are still under development. Results are currently reported only for Basque tasks; results for the rest of the languages will be released in the near future.

Citation

@inproceedings{sainz-etal-2025-instructing,
    title = "Instructing Large Language Models for Low-Resource Languages: A Systematic Study for {B}asque",
    author = "Sainz, Oscar  and
      Perez, Naiara  and
      Etxaniz, Julen  and
      Fernandez de Landa, Joseba  and
      Aldabe, Itziar  and
      Garc{\'i}a-Ferrero, Iker  and
      Zabala, Aimar  and
      Azurmendi, Ekhi  and
      Rigau, German  and
      Agirre, Eneko  and
      Artetxe, Mikel  and
      Soroa, Aitor",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1484/",
    doi = "10.18653/v1/2025.emnlp-main.1484",
    pages = "29124--29148",
    ISBN = "979-8-89176-332-6",
    abstract = "Instructing language models with user intent requires large instruction datasets, which are only available for a limited set of languages. In this paper, we explore alternatives to conventional instruction adaptation pipelines in low-resource scenarios. We assume a realistic scenario for low-resource languages, where only the following are available: corpora in the target language, existing open-weight multilingual base and instructed backbone LLMs, and synthetically generated instructions sampled from the instructed backbone. We present a comprehensive set of experiments for Basque that systematically study different combinations of these components evaluated on benchmarks and human preferences from 1,680 participants. Our conclusions show that target language corpora are essential, with synthetic instructions yielding robust models, and, most importantly, that using as backbone an instruction-tuned model outperforms using a base non-instructed model. Scaling up to Llama 3.1 Instruct 70B as backbone, our model comes near frontier models of much larger sizes for Basque, without using any Basque instructions. We release code, models, instruction datasets, and human preferences to support full reproducibility in future research on low-resource language adaptation."
}

Acknowledgements

This work has been partially supported by the Basque Government (Research group funding IT1570-22 and IKER-GAITU project), the Spanish Ministry for Digital Transformation and of Civil Service, and the EU-funded NextGenerationEU Recovery, Transformation and Resilience Plan (ALIA project). The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2024E01-042.
