# 🔥 Dating & Relationship Advisor GGUF 🔥

## 📌 Model Summary
This model is a casual, informal AI assistant designed to provide dating and relationship advice in a fun, unfiltered, and humorous way. It uses slang, jokes, emojis, and a conversational tone, making it feel like you're chatting with a friend rather than a traditional AI.
The model has been fine-tuned using a combination of:
- Crowdsourced dating advice (Reddit FAISS) 📝
- Expert relationship guides & books (PDF FAISS) 📖
It supports two main deployment methods:
- **Google Drive Method**: load the model from Google Drive.
- **Hugging Face Method**: download and load the model from the Hugging Face Hub.
## 📋 Model Details
- Model Type: GGUF model (llama.cpp-compatible)
- Developed by: [Your Name / Organization]
- Language: English
- License: Apache 2.0 (or your choice)
- Base Model: Qwen/Qwen2.5-3B
- Training Data: relationship advice forums, dating guides, and expert PDFs
- Inference Framework: `llama-cpp-python`
## 🚀 How to Use the Model

### 1️⃣ Method 1: Load from Google Drive

**Step 1: Install Dependencies**

```bash
pip install llama-cpp-python
```
**Step 2: Mount Google Drive & Load the Model**

```python
from llama_cpp import Llama
import random

# Mount Google Drive (Colab)
from google.colab import drive
drive.mount("/content/drive")

# Google Drive path to the GGUF file
model_path = "/content/drive/MyDrive/Dating_LLM_GGUF/damn.gguf"

# Load the model. Note: Llama() takes no system_message argument,
# so the persona is prepended to the prompt instead.
llm = Llama(
    model_path=model_path,
    n_gpu_layers=40,
    n_ctx=2048,
)

SYSTEM_PROMPT = "You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"

# Wrap user input in a casual, emotional prompt
def make_emotional(user_input):
    salutation = random.choice(["Yo dude! 😎", "Hey buddy! 👋", "Listen up, my friend ❤️"])
    suffix = " Give me some real, no-BS advice with emojis! 🔥💯😂"
    return f"{SYSTEM_PROMPT}\n{salutation} {user_input} {suffix}"

# Run inference
user_input = "My partner doesn't like my friends. What should I do?"
emotional_prompt = make_emotional(user_input)
output = llm(emotional_prompt, max_tokens=200)

# Print the generated advice
print(output["choices"][0]["text"])
```
### 2️⃣ Method 2: Load from Hugging Face

**Step 1: Install Dependencies**

```bash
pip install llama-cpp-python huggingface_hub
```
**Step 2: Download & Load the Model**

```python
from llama_cpp import Llama
from huggingface_hub import hf_hub_download

# Download the GGUF file from the Hugging Face Hub
model_path = hf_hub_download(
    repo_id="your-username/your-gguf-model",
    filename="your_model.gguf",
    cache_dir="./models",
)

# Load the model. Note: Llama() takes no system_message argument,
# so the persona is prepended to the prompt instead.
llm = Llama(
    model_path=model_path,
    n_gpu_layers=40,
    n_ctx=2048,
)

system_prompt = "You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"

# Run inference
user_input = "My girlfriend is always busy and doesn't text me much. What should I do?"
response = llm(f"{system_prompt}\n{user_input}", max_tokens=200)
print(response["choices"][0]["text"])
```
## 💾 Training Details

### 📚 Training Data

This model was trained on a diverse dataset, including:
- ✅ Reddit FAISS – extracts real-world dating discussions from Reddit posts.
- ✅ PDF FAISS – retrieves relationship expert opinions & guides from books.
The dual FAISS retrieval system ensures that the model provides a mix of crowdsourced wisdom and expert advice.
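The dual-retrieval idea can be sketched without FAISS itself: keep one vector index per source and merge the top-scoring hits from each before building the prompt. Below is a minimal NumPy stand-in; the toy embeddings, document titles, and the `dual_retrieve` merging policy are illustrative assumptions, not the card's actual pipeline.

```python
import numpy as np

def cosine_top_k(query_vec, index_vecs, docs, k=2):
    """Return the k docs whose vectors are most cosine-similar to the query."""
    sims = index_vecs @ query_vec / (
        np.linalg.norm(index_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    order = np.argsort(-sims)[:k]
    return [(docs[i], float(sims[i])) for i in order]

def dual_retrieve(query_vec, reddit_index, reddit_docs, pdf_index, pdf_docs, k=1):
    """Merge top hits from the crowdsourced and the expert index, best first."""
    hits = (cosine_top_k(query_vec, reddit_index, reddit_docs, k)
            + cosine_top_k(query_vec, pdf_index, pdf_docs, k))
    return sorted(hits, key=lambda h: -h[1])

# Toy 3-dim "embeddings" standing in for real sentence vectors
reddit_docs = ["jealousy thread", "texting habits thread"]
reddit_index = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pdf_docs = ["attachment-styles chapter", "communication chapter"]
pdf_index = np.array([[0.9, 0.1, 0.0], [0.0, 0.2, 1.0]])

query = np.array([1.0, 0.1, 0.0])  # a "jealousy"-flavored query
results = dual_retrieve(query, reddit_index, reddit_docs, pdf_index, pdf_docs, k=1)
print(results)
```

With `k=1` per index, the merged context always contains one crowdsourced and one expert passage, which is the mix the card describes.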
### ⚙️ Training Process
- Preprocessing: cleaned, tokenized, and formatted text.
- Fine-Tuning: used FP16 mixed precision for efficiency.
- Model Architecture: GGUF export of the fine-tuned base model.
## 📊 Evaluation & Performance

### 🗂️ Testing Data
The model was tested on real-life dating scenarios, such as:
- "My partner doesn't want to move in together. What should I do?"
- "Is it normal to argue every day in a relationship?"
- "My crush left me on read 😭 What now?"
### 📈 Metrics
- Engagement Score – is the response conversational & engaging?
- Coherence – does the response make sense?
- Slang & Humor – does it feel natural?
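The metrics above were judged by human raters; a rough automated proxy can simply count informal markers in a response. The sketch below is a toy heuristic: the slang list, emoji ranges, and the 0–1 scoring caps are invented for illustration and are not the card's actual evaluation setup.

```python
import re

# Toy marker lists, purely illustrative
SLANG = {"yo", "dude", "bro", "lol", "no-bs", "vibe", "lowkey"}
EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def informality_score(text):
    """Crude 0-1 proxy: capped counts of slang words and emojis."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    slang_hits = len(words & SLANG)
    emoji_hits = len(EMOJI_RE.findall(text))
    # Cap each signal at 3 so one long emoji run doesn't dominate
    return min(slang_hits, 3) / 6 + min(emoji_hits, 3) / 6

print(informality_score("Yo dude, lowkey just talk to her! 🔥😂"))
```

A heuristic like this only flags surface style; coherence still needs human (or LLM-as-judge) review.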
### 🏆 Results
- ✅ 90% of users found the responses engaging 🎉
- ✅ Feels like texting a real friend!
- ⚠️ Sometimes overuses emojis 😅
## 🚨 Model Limitations & Risks

### ⚠️ Bias & Limitations
- This model reflects human biases found in dating advice.
- It may overgeneralize relationships & emotions.
- Not a substitute for mental-health support or therapy.
### 📌 Recommendations
- ✅ Use it for fun, light-hearted guidance.
- ❌ Don't rely on it for serious relationship decisions.
## 🌍 Environmental Impact
- Hardware: NVIDIA A100 GPUs
- Training Time: ~24 hours
- Carbon Emission Estimate: ~5 kg CO₂
## 📜 License & Citation

### 📜 License
Apache 2.0 (or your chosen license).

### 📝 Citation

```bibtex
@misc{yourname2025datingadvisor,
  title={Dating & Relationship Advisor AI},
  author={Your Name},
  year={2025},
  publisher={Hugging Face}
}
```
## 📢 Uploading to Hugging Face

**Step 1️⃣: Install the Hugging Face CLI**

```bash
pip install huggingface_hub
```

**Step 2️⃣: Log in**

```bash
huggingface-cli login
```

**Step 3️⃣: Create a Model Repo**
- Go to Hugging Face Models → click "New Model"
- Model ID: `your-username/your-gguf-model`
- License: Apache 2.0
- Tags: `llama`, `gguf`, `dating`, `relationships`, `llama.cpp`

**Step 4️⃣: Upload the GGUF Model**

```bash
huggingface-cli upload your-username/your-gguf-model your_model.gguf
```
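The same upload can be done from Python via `huggingface_hub`. A minimal sketch, mirroring the placeholder repo ID and filename above; `upload_gguf` is a hypothetical helper, not part of any library:

```python
def upload_gguf(repo_id: str, gguf_path: str) -> None:
    """Upload a local GGUF file to a Hugging Face model repo."""
    from huggingface_hub import HfApi  # requires `pip install huggingface_hub`

    api = HfApi()
    api.upload_file(
        path_or_fileobj=gguf_path,
        path_in_repo=gguf_path.rsplit("/", 1)[-1],  # keep just the filename
        repo_id=repo_id,
        repo_type="model",
    )

# Example (run `huggingface-cli login` first):
# upload_gguf("your-username/your-gguf-model", "your_model.gguf")
```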
Model tree: `LakithGR/QWEN2.5-3b-DAP` (base model: `Qwen/Qwen2.5-3B`)