# Model Card for PetroQA-OrpoMistral-7B-Instruct

This model is just for fun.
## Training Data

Tweets collected from Gustavo Petro's account on Twitter (X).
## Training Procedure

Fine-tuned with ORPO (Odds Ratio Preference Optimization).
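To make the objective concrete, here is a minimal numerical sketch of the ORPO loss: the usual supervised (NLL) loss on the chosen answer, plus a penalty based on the log odds ratio between the chosen and rejected answers. The helper names and the `lam` weight are illustrative, not taken from this model's training config.

```python
import math

def odds(p: float) -> float:
    # Odds of assigning probability p to a sequence
    return p / (1.0 - p)

def orpo_penalty(p_chosen: float, p_rejected: float) -> float:
    # Odds-ratio term: -log sigmoid(log(odds_chosen / odds_rejected)).
    # Small when the chosen answer is much more likely than the rejected one.
    log_or = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_or)))

def orpo_loss(nll_chosen: float, p_chosen: float, p_rejected: float, lam: float = 0.1) -> float:
    # Total ORPO objective: SFT loss on the chosen answer plus the weighted penalty
    return nll_chosen + lam * orpo_penalty(p_chosen, p_rejected)

# When chosen and rejected are equally likely, the penalty is log 2;
# it shrinks as the model prefers the chosen answer more strongly.
print(orpo_penalty(0.5, 0.5))
print(orpo_penalty(0.9, 0.1))
```

In practice this loss is applied per preference pair during fine-tuning (e.g. via a trainer that computes the sequence log-probabilities), rather than on scalar probabilities as in this sketch.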
## Usage

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
)

model_repo = "jhonparra18/PetroQA-OrpoMistral-7B-Instruct"

# 4-bit (QLoRA-style) quantization config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# Tokenizer & model
TOKENIZER = AutoTokenizer.from_pretrained(model_repo)
MODEL = AutoModelForCausalLM.from_pretrained(
    model_repo,
    quantization_config=bnb_config,
    device_map="auto",  # places the model; no need to pass device_map to the pipeline again
    attn_implementation="eager",
)

TEXT_GENERATION_PIPELINE = pipeline(
    task="text-generation",
    model=MODEL,
    tokenizer=TOKENIZER,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
)

# The model is tuned on Spanish tweets, so prompt it in Spanish.
messages = [
    {"role": "user", "content": "Como jefe de estado, qué opinas de Ivan Duque?"},
]

response = TEXT_GENERATION_PIPELINE(messages)
print(response[0]["generated_text"])
```
## Model Tree

- Base model: mistralai/Mistral-7B-v0.3
- Finetuned from: mistralai/Mistral-7B-Instruct-v0.3