---
datasets:
- namelessai/helply
base_model: trillionlabs/Trillion-7B-preview
library_name: transformers
tags:
- psychology
- medical
- chat
- instruction
license: mit
language:
- en
- ko
---

# Model Card for TrillionHelp
TrillionHelp uses trillionlabs/Trillion-7B-preview as the backbone.
## Model Details
This model is fine-tuned on the namelessai/helply dataset, which is designed to enhance mental health reasoning capabilities.
### Model Description
This model was fine-tuned to assist psychologists in supporting patients.
- Developed by: Alex Scott
- Model type: Language Model, Adapter Model (available in a subfolder of the model repo; see the loading sketch after this list)
- Finetuned from model: trillionlabs/Trillion-7B-preview
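
Since the adapter weights live in a subfolder of this repository rather than at its root, `peft` should also be able to fetch them straight from the Hub by pointing at that subfolder. Below is a minimal sketch, assuming the standard `subfolder` argument of `PeftModel.from_pretrained`; the repo id and folder name are placeholders, so check the repo's file listing for the real names:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# <this-repo-id> and "adapter" are placeholders: substitute this model's
# actual Hub id and the adapter subfolder shown in the repo's file list.
base_model = AutoModelForCausalLM.from_pretrained("trillionlabs/Trillion-7B-preview")
model = PeftModel.from_pretrained(base_model, "<this-repo-id>", subfolder="adapter")
```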
## Usage (Adapter Only; full model snippet coming soon)
Use the code snippet below to load the base model and apply the adapter for inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model_name = "trillionlabs/Trillion-7B-preview"
adapter_path = "/path/to/adapter"  # Replace with the actual adapter path

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Apply the adapter and merge it into the base weights for faster inference
model = PeftModel.from_pretrained(base_model, adapter_path)
model = model.merge_and_unload()

# Run inference (max_new_tokens keeps generate from stopping at the short default length)
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
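
The snippet above feeds raw text straight into `generate`. Since the backbone is an instruction-tuned chat model, wrapping the prompt with the tokenizer's chat template (assuming the checkpoint ships one, as most chat checkpoints do) typically produces better-formed responses. A sketch reusing the `tokenizer` and `model` from above; the messages are illustrative only:

```python
# Build a chat-formatted prompt; the system and user messages are examples only.
messages = [
    {"role": "system", "content": "You are a supportive assistant for mental health professionals."},
    {"role": "user", "content": "Summarize common grounding techniques for acute anxiety."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```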