---
license: mit
datasets:
- millat/StudyAbroadGPT-Dataset
language:
- en
base_model:
- unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- unsloth
- trl
- sft
---

# StudyAbroadGPT-7B
A fine-tuned version of Mistral-7B-Instruct-v0.3, optimized for study abroad assistance.
## Model Details
- Base Model: Mistral-7B-Instruct-v0.3 (4-bit Unsloth checkpoint, per the metadata above)
- Training: LoRA fine-tuning (r=16, alpha=32); a configuration sketch follows this list
- Quantization: 4-bit
- Max Length: 2048 tokens
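
The hyperparameters above correspond to a standard PEFT LoRA setup. Below is a minimal sketch using the `peft` library, assuming the 4-bit Unsloth base checkpoint listed in the metadata; the `target_modules` and `lora_dropout` values are typical choices for Mistral and are assumptions, not settings documented by this card.

```python
from peft import LoraConfig

# Minimal sketch of a LoRA configuration matching the reported hyperparameters.
# The target modules are the usual Mistral attention projections; the exact
# modules and dropout used during training are assumptions.
lora_config = LoraConfig(
    r=16,                    # LoRA rank reported above
    lora_alpha=32,           # LoRA alpha reported above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,       # assumed; not stated in the card
    bias="none",
    task_type="CAUSAL_LM",
)
```

In a setup matching the card's `trl`/`sft` tags, a config like this would typically be passed to an SFT trainer together with the 4-bit base model and the StudyAbroadGPT-Dataset.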
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("millat/StudyAbroadGPT-7B")
tokenizer = AutoTokenizer.from_pretrained("millat/StudyAbroadGPT-7B")
```
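
To generate a response, standard `transformers` generation can be used. The example prompt and sampling settings below are illustrative assumptions, not values documented by this card.

```python
# Example prompt; the exact prompt format expected by the fine-tune is an assumption.
prompt = "What documents do I need for a US student visa?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a reply; keep total length within the model's reported 2048-token context.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```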