# Model Card for jhonparra18/PetroQA-OrpoMistral-7B-Instruct

This is just for fun.

## Training Data

Tweets obtained from Gustavo Petro's Twitter account.

## Training Procedure

ORPO (Odds Ratio Preference Optimization).
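
ORPO combines supervised fine-tuning and preference alignment in a single objective, so no separate reference model is needed. Below is a minimal, illustrative sketch of how such a run can be set up with the `trl` library; the base model name, dataset rows, and hyperparameters are assumptions for illustration, not the exact configuration used to train this checkpoint.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Placeholder base checkpoint (assumption): some Mistral-7B-Instruct variant.
base_model = "mistralai/Mistral-7B-Instruct-v0.2"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# ORPO expects preference pairs with "prompt", "chosen" and "rejected" columns.
# The rows below are illustrative only.
train_dataset = Dataset.from_dict(
    {
        "prompt": ["Qué opinas de la reforma?"],
        "chosen": ["Respuesta en el estilo de los trinos de Petro."],
        "rejected": ["Respuesta genérica fuera de estilo."],
    }
)

orpo_args = ORPOConfig(
    output_dir="orpo-petro",
    beta=0.1,                      # weight of the odds-ratio preference term
    max_length=1024,
    max_prompt_length=512,
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,           # newer trl versions call this `processing_class`
)
trainer.train()
```

In practice this would typically be combined with a QLoRA setup like the one in the Usage section (4-bit base weights plus LoRA adapters) so a 7B model fits in limited GPU memory.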

## Usage

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
)

model_repo = "jhonparra18/PetroQA-OrpoMistral-7B-Instruct"

# 4-bit quantization config (QLoRA-style NF4 quantization)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# Tokenizer & model
TOKENIZER = AutoTokenizer.from_pretrained(model_repo)

MODEL = AutoModelForCausalLM.from_pretrained(
    model_repo,
    quantization_config=bnb_config,
    device_map="auto",
    attn_implementation="eager",
)

# Text-generation pipeline; the model's chat template is applied automatically
# when the input is a list of role/content messages.
TEXT_GENERATION_PIPELINE = pipeline(
    model=MODEL,
    tokenizer=TOKENIZER,
    task="text-generation",
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
)

messages = [
    # "As head of state, what do you think of Iván Duque?"
    {"role": "user", "content": "Como jefe de estado, qué opinas de Ivan Duque?"},
]
response = TEXT_GENERATION_PIPELINE(messages)
print(response)
```
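
When the pipeline receives a list of role/content messages, recent `transformers` versions return the whole conversation under `generated_text`. A small sketch for pulling out only the assistant reply (the exact output layout depends on your `transformers` version):

```python
# With chat-style input, response[0]["generated_text"] is usually the message list,
# ending with the newly generated assistant turn.
assistant_reply = response[0]["generated_text"][-1]["content"]
print(assistant_reply)
```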