# Rizoner Email Writer (SFT)

Fine-tuned LoRA adapter for email writing, based on Qwen2.5-14B-Instruct.
## Training Details

- **Base Model:** unsloth/Qwen2.5-14B-Instruct
- **Training Method:** Supervised Fine-Tuning (SFT) with LoRA
- **Training Data:** 1,067 email pairs from personal email history
- **Training Framework:** Unsloth + TRL
- **Adapter Size:** ~275 MB
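
For SFT, each email pair is typically rendered into a single ChatML-formatted training string before being passed to the trainer. A minimal sketch of that step — the function name, field layout, and example content are illustrative assumptions, not the actual data pipeline:

```python
# Hypothetical data-prep step: render one (instruction, email) pair into a
# single ChatML training string for SFT. Names and phrasing are assumptions.
SYSTEM_PROMPT = (
    "You are an expert email writer. Write professional, clear, "
    "and contextually appropriate emails."
)

def to_training_text(instruction: str, email: str) -> str:
    """Render one training example in Qwen's ChatML format."""
    return (
        f"<|im_start|>system\n{SYSTEM_PROMPT}<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        f"<|im_start|>assistant\n{email}<|im_end|>\n"
    )

example = to_training_text(
    "Compose email to John at Company (re: Project Update)",
    "Hi John,\n\nQuick update on the project...\n\nBest,\nR.",
)
```

Strings like this would populate the dataset's text field that an SFT trainer consumes.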
## Usage

```python
from unsloth import FastLanguageModel

# Load the base model with the LoRA adapter
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="rizoner/rizoner-email-writer",  # Your Hugging Face repo
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

# Generate an email
prompt = """<|im_start|>system
You are an expert email writer. Write professional, clear, and contextually appropriate emails.<|im_end|>
<|im_start|>user
Compose email to John at Company (re: Project Update)<|im_end|>
<|im_start|>assistant
"""
inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
email = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(email)
```
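
The hand-assembled prompt above follows Qwen's ChatML layout — in practice `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` builds it for you. A small standalone sketch of the layout (the helper name is illustrative):

```python
# Standalone sketch of the ChatML prompt layout used above. The helper name
# is illustrative; prefer tokenizer.apply_chat_template in real code.
def build_prompt(messages: list) -> str:
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # open the turn for generation
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are an expert email writer. "
     "Write professional, clear, and contextually appropriate emails."},
    {"role": "user", "content": "Compose email to John at Company (re: Project Update)"},
])
```

Leaving the assistant turn open is what cues the model to generate the reply rather than a new user turn.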
## API Usage (OpenAI-compatible)

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api-inference.huggingface.co/v1/",
    api_key="hf_...",  # Your Hugging Face token
)
response = client.chat.completions.create(
    model="rizoner/rizoner-email-writer",
    messages=[
        {"role": "system", "content": "You are an expert email writer."},
        {"role": "user", "content": "Compose email to John at Company (re: Meeting)"},
    ],
)
print(response.choices[0].message.content)
```
## License

Apache 2.0