qwen72b-ar-lora

Model Description

This model is a fine-tuned version of Qwen/Qwen2.5-72B-Instruct, adapted for improved performance on Arabic language tasks. Fine-tuning focused on strengthening instruction following and conversational ability in Arabic.

Intended Use

This model is intended for use as a general-purpose chatbot, for Arabic question answering, and for various text generation tasks. It works best when prompted in a conversational format via the chat template, as shown below.

How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "AbdulmalekDS/qwen72b-ar-lora"

# Load the tokenizer and the model in bfloat16, sharding across available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Build a chat-style prompt. The user turn asks (in Arabic):
# "Explain the concept of artificial intelligence to me."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "اشرح لي مفهوم الذكاء الاصطناعي"}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a response and decode only the newly generated tokens,
# skipping the prompt tokens that generate() echoes back.
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

print(response)
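
Because the model is meant to be used conversationally, a follow-up turn can be handled by appending the assistant reply and the next user message to messages and re-applying the chat template. A minimal sketch continuing the example above (the follow-up question is illustrative, not from the model card):

# Continue the conversation: append the assistant reply and a follow-up user turn,
# then rebuild the prompt with the chat template. The follow-up asks (in Arabic):
# "Give me a simple example."
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "أعطني مثالاً بسيطاً"})

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
follow_up = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(follow_up)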
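
At 72B parameters, the bfloat16 weights require on the order of 145 GB of GPU memory. If that much memory is not available, the model can instead be loaded in 4-bit. This is a sketch assuming bitsandbytes and accelerate are installed; quantization may slightly affect output quality:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "AbdulmalekDS/qwen72b-ar-lora"

# 4-bit NF4 quantization with bfloat16 compute, to fit the model in less GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)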