QWEN2.5-3b-DAP / README.md
---
language:
  - zho
  - eng
  - fra
  - spa
  - por
  - deu
  - ita
  - rus
  - jpn
  - kor
  - vie
  - tha
  - ara
base_model:
  - Qwen/Qwen2.5-3B
new_version: Qwen/Qwen2.5-3B
library_name: sentence-transformers
---

🔥 Dating & Relationship Advisor GGUF 🔥

📌 Model Summary

This model is a casual, informal AI assistant designed to provide dating and relationship advice in a fun, unfiltered, and humorous way. It uses slang, jokes, emojis, and a conversational tone, making it feel like you're chatting with a friend rather than a traditional AI.

The model has been fine-tuned using a combination of:

  • Crowdsourced dating advice (Reddit FAISS) 📌
  • Expert relationship guides & books (PDF FAISS) 📚

It supports two main deployment methods:

  1. Google Drive Method – loading the model from Google Drive.
  2. Hugging Face Method – downloading & using the model from the Hugging Face Hub.

📚 Model Details

  • Model Type: GGUF quantized model (run with llama.cpp)
  • Developed by: [Your Name / Organization]
  • Language: English
  • License: Apache 2.0 (or your choice)
  • Base Model: Qwen/Qwen2.5-3B (see metadata above)
  • Training Data: Relationship advice forums, dating guides, and expert PDFs
  • Inference Framework: llama-cpp-python

🚀 How to Use the Model

1️⃣ Method 1: Load from Google Drive

Step 1: Install Dependencies

pip install llama-cpp-python

Step 2: Mount Google Drive & Load Model

from llama_cpp import Llama
import random

# Mount Google Drive (Colab)
from google.colab import drive
drive.mount("/content/drive")

# Google Drive path
model_path = "/content/drive/MyDrive/Dating_LLM_GGUF/damn.gguf"

# Llama() has no system_message argument, so keep the instruction in a
# string and prepend it to each prompt instead
SYSTEM_PROMPT = "You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"

# Load the model
llm = Llama(
    model_path=model_path,
    n_gpu_layers=40,  # number of layers to offload to GPU; use 0 for CPU-only
    n_ctx=2048,       # context window in tokens
)

# Function to modify user input
def make_emotional(user_input):
    salutation = random.choice(["Yo dude! 😎", "Hey buddy! 🙌", "Listen up, my friend ❤️"])
    suffix = " Give me some real, no-BS advice with emojis! 😂🔥💖"
    return f"{salutation} {user_input} {suffix}"

# Run inference
user_input = "My partner doesn't like my friends. What should I do?"
emotional_prompt = make_emotional(user_input)
output = llm(f"{SYSTEM_PROMPT}\n\n{emotional_prompt}", max_tokens=200)

# Print the output
print(output["choices"][0]["text"])
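
The plain completion call `llm(prompt, ...)` does not apply a chat template, so any system instruction has to live inside the prompt text itself. One option for Qwen2.5-family models, which use the ChatML format, is to build the prompt by hand. A minimal sketch (`build_chatml_prompt` is an illustrative helper, not part of llama-cpp-python):

```python
def build_chatml_prompt(system_message: str, user_message: str) -> str:
    """Format one system + user turn as a ChatML prompt string."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!",
    "My partner doesn't like my friends. What should I do?",
)
```

Alternatively, `llm.create_chat_completion(messages=[...])` applies the chat template stored in the GGUF file automatically.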

2️⃣ Method 2: Load from Hugging Face

Step 1: Install Dependencies

pip install llama-cpp-python huggingface_hub

Step 2: Download Model from Hugging Face

from llama_cpp import Llama
from huggingface_hub import hf_hub_download

# Download model from Hugging Face Hub
model_path = hf_hub_download(
    repo_id="your-username/your-gguf-model",
    filename="your_model.gguf",
    cache_dir="./models",
)

# Llama() has no system_message argument, so keep the instruction in a
# string and prepend it to each prompt instead
SYSTEM_PROMPT = "You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"

# Load the model
llm = Llama(
    model_path=model_path,
    n_gpu_layers=40,  # number of layers to offload to GPU; use 0 for CPU-only
    n_ctx=2048,
)

# Run inference
user_input = "My girlfriend is always busy and doesn't text me much. What should I do?"
response = llm(f"{SYSTEM_PROMPT}\n\n{user_input}", max_tokens=200)
print(response["choices"][0]["text"])
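
The two methods can also be combined, so a notebook prefers an existing Drive copy and only hits the Hub when the file is missing. A sketch (the `resolve_model_path` helper is hypothetical, not part of either library):

```python
import os
from typing import Callable

def resolve_model_path(local_path: str, download: Callable[[], str]) -> str:
    """Return local_path if the file already exists; otherwise call the
    supplied download function (e.g. a hf_hub_download wrapper)."""
    if os.path.exists(local_path):
        return local_path
    return download()

# Example wiring (Drive first, Hub as fallback):
# model_path = resolve_model_path(
#     "/content/drive/MyDrive/Dating_LLM_GGUF/damn.gguf",
#     lambda: hf_hub_download(repo_id="your-username/your-gguf-model",
#                             filename="your_model.gguf"),
# )
```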

💾 Training Details

📚 Training Data

This model was trained on a diverse dataset, including:

  ✅ Reddit FAISS – extracts real-world dating discussions from Reddit posts.
  ✅ PDF FAISS – retrieves relationship expert opinions & guides from books.

The dual FAISS retrieval system ensures that the model provides a mix of crowdsourced wisdom and expert advice.
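
The merge logic can be sketched with a brute-force nearest-neighbour search standing in for the two FAISS indexes (illustrative only; the real pipeline would call `index.search(query, k)` on each `faiss` index):

```python
import numpy as np

def dual_retrieve(query_vec, reddit_vecs, pdf_vecs, k=2):
    """Take the top-k nearest neighbours (L2 distance, as faiss.IndexFlatL2
    uses) from each source separately, so the retrieved context always
    mixes crowdsourced and expert passages."""
    def top_k(vecs):
        dists = np.linalg.norm(vecs - query_vec, axis=1)
        return np.argsort(dists)[:k]
    return top_k(reddit_vecs), top_k(pdf_vecs)
```

Retrieving from each index separately (rather than one merged index) is what guarantees both sources appear in every prompt context.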

βš™οΈ Training Process

  • Preprocessing: Cleaned, tokenized, and formatted text.
  • Fine-Tuning: Used FP16 mixed precision for efficiency.
  • Model Architecture: GGUF export of the fine-tuned base model (Qwen/Qwen2.5-3B).
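
The exact cleaning pipeline is not published; a typical preprocessing pass over scraped forum text might look like this (illustrative assumptions throughout):

```python
import re

def preprocess(text: str) -> str:
    """Illustrative cleaning pass: strip URLs, normalise curly quotes,
    and collapse runs of whitespace."""
    text = re.sub(r"https?://\S+", "", text)   # drop links
    text = text.replace("\u2019", "'")         # curly apostrophe -> straight
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return text
```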

📊 Evaluation & Performance

πŸ—’οΈ Testing Data

The model was tested on real-life dating scenarios, such as:

  • "My partner doesn’t want to move in together. What should I do?"
  • "Is it normal to argue every day in a relationship?"
  • "My crush left me on read 😭 What now?"

📌 Metrics

  • Engagement Score – Is the response conversational & engaging?
  • Coherence – Does the response make sense?
  • Slang & Humor – Does it feel natural?
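
None of these are standard benchmarks; the Slang & Humor signal, for instance, could be approximated with a toy heuristic like the following (the weights and slang list are arbitrary illustrative choices, not the evaluation actually used):

```python
def slang_emoji_score(text, slang=("bro", "dude", "ngl", "no-BS")):
    """Toy heuristic: score rises with emoji characters (rough check via
    the main emoji code-point range) and slang-term hits, capped at 1.0."""
    emoji_count = sum(1 for ch in text if ord(ch) >= 0x1F300)
    slang_count = sum(text.lower().count(s.lower()) for s in slang)
    return min(1.0, 0.2 * emoji_count + 0.3 * slang_count)
```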

📈 Results

✅ 90% of users found the responses engaging 🎉
✅ Feels like texting a real friend!
⚠️ Sometimes overuses emojis 😂🔥


🛑 Model Limitations & Risks

⚠️ Bias & Limitations

  • This model reflects human biases found in dating advice.
  • It may overgeneralize relationships & emotions.
  • Not suitable for mental health or therapy.

📌 Recommendations

✅ Use it for fun, light-hearted guidance.
❌ Don't rely on it for serious relationship decisions.


🌍 Environmental Impact

  • Hardware: NVIDIA A100 GPUs
  • Training Time: ~24 hours
  • Carbon Emission Estimate: 5 kg CO2
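
The estimate is roughly consistent with a simple power × time × grid-intensity calculation; all three inputs below are assumed values for illustration, not logged measurements:

```python
power_kw = 0.4          # one A100 at ~400 W average draw (assumed)
hours = 24.0            # reported training time
grid_kg_per_kwh = 0.5   # assumed grid carbon intensity
emissions_kg = power_kw * hours * grid_kg_per_kwh
print(f"~{emissions_kg:.1f} kg CO2")  # ~4.8 kg, close to the reported 5 kg
```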

💜 License & Citation

📚 License

πŸ“ Apache 2.0 (or your chosen license).

📢 Citation

@misc{yourname2025datingadvisor,
  title={Dating & Relationship Advisor AI},
  author={Your Name},
  year={2025},
  publisher={Hugging Face}
}

📢 Uploading to Hugging Face

Step 1️⃣: Install Hugging Face CLI

pip install huggingface_hub

Step 2️⃣: Log in

huggingface-cli login

Step 3️⃣: Create a Model Repo

  • Go to Hugging Face Models → click "New Model"
  • Model ID: your-username/your-gguf-model
  • License: Apache 2.0
  • Tags: llama, gguf, dating, relationships, llama.cpp

Step 4️⃣: Upload GGUF Model

huggingface-cli upload your-username/your-gguf-model your_model.gguf