---
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-3B
new_version: Qwen/Qwen2.5-3B
library_name: sentence-transformers
---
# 🔥 Dating & Relationship Advisor GGUF 🔥
## 📋 Model Summary
This model is a **casual, informal AI assistant** designed to provide **dating and relationship advice** in a fun, unfiltered, and humorous way. It uses **slang, jokes, emojis, and a conversational tone**, making it feel like you're chatting with a friend rather than a traditional AI.
The model has been **fine-tuned** using a combination of:
- **Crowdsourced dating advice (Reddit FAISS)** 💬
- **Expert relationship guides & books (PDF FAISS)** 📚
It supports **two main deployment methods**:
1. **Google Drive Method** – Loading the model from Google Drive.
2. **Hugging Face Method** – Downloading & using the model from the Hugging Face Hub.
---
## 📌 Model Details
- **Model Type:** GGUF-based LLaMA model
- **Developed by:** [Your Name / Organization]
- **Language:** English
- **License:** Apache 2.0 (or your choice)
- **Base Model:** LLaMA (Meta)
- **Training Data:** Relationship advice forums, dating guides, and expert PDFs
- **Inference Framework:** `llama-cpp-python`
---
## 🚀 How to Use the Model
### **1️⃣ Method 1: Load from Google Drive**
#### **Step 1: Install Dependencies**
```bash
pip install llama-cpp-python
```
#### **Step 2: Mount Google Drive & Load Model**
```python
from llama_cpp import Llama
import random

# Mount Google Drive (when running in Colab)
from google.colab import drive
drive.mount("/content/drive")

# Google Drive path
model_path = "/content/drive/MyDrive/Dating_LLM_GGUF/damn.gguf"

# Persona prompt: the Llama constructor has no system_message argument,
# so we prepend the persona to the prompt instead
SYSTEM_PROMPT = "You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"

# Load the model
llm = Llama(
    model_path=model_path,
    n_gpu_layers=40,  # set to 0 for CPU-only inference
    n_ctx=2048,
)

# Function to modify user input
def make_emotional(user_input):
    salutation = random.choice(["Yo dude! 😎", "Hey buddy! 😂", "Listen up, my friend ❤️"])
    suffix = " Give me some real, no-BS advice with emojis! 💯🔥😂"
    return f"{SYSTEM_PROMPT}\n\n{salutation} {user_input} {suffix}"

# Run inference
user_input = "My partner doesn't like my friends. What should I do?"
emotional_prompt = make_emotional(user_input)
output = llm(emotional_prompt, max_tokens=200)

# Print the output
print(output["choices"][0]["text"])
```
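An alternative to raw text completion is `llama-cpp-python`'s chat API, `create_chat_completion`, which takes the persona as the `system` entry of a `messages` list. A minimal sketch of the message-building step (`build_messages` is a hypothetical helper; the model call itself is commented out, since it needs the GGUF file on disk):

```python
def build_messages(system_prompt, user_input):
    """Build a chat-style messages list for llm.create_chat_completion()."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_messages(
    "You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!",
    "My partner doesn't like my friends. What should I do?",
)

# With a loaded model this would run as:
# response = llm.create_chat_completion(messages=messages, max_tokens=200)
# print(response["choices"][0]["message"]["content"])
```

Chat-style calls let llama.cpp apply the model's chat template, which usually keeps the persona more consistent than prepending it to a plain completion prompt.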
---
### **2️⃣ Method 2: Load from Hugging Face**
#### **Step 1: Install Dependencies**
```bash
pip install llama-cpp-python huggingface_hub
```
#### **Step 2: Download Model from Hugging Face**
```python
from llama_cpp import Llama
from huggingface_hub import hf_hub_download

# Download model from Hugging Face Hub
model_path = hf_hub_download(
    repo_id="your-username/your-gguf-model",
    filename="your_model.gguf",
    cache_dir="./models",
)

# Persona prompt: prepended manually, since the Llama constructor
# does not take a system_message argument
SYSTEM_PROMPT = "You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"

# Load the model
llm = Llama(
    model_path=model_path,
    n_gpu_layers=40,
    n_ctx=2048,
)

# Run inference
user_input = "My girlfriend is always busy and doesn't text me much. What should I do?"
response = llm(f"{SYSTEM_PROMPT}\n\n{user_input}", max_tokens=200)
print(response["choices"][0]["text"])
```
---
## 💾 Training Details
### **📚 Training Data**
This model was trained on a diverse dataset, including:
✅ **Reddit FAISS** – Extracts **real-world** dating discussions from **Reddit posts**.
✅ **PDF FAISS** – Retrieves **expert opinions & guides** from relationship books.
The **dual FAISS retrieval system** ensures that the model provides a mix of **crowdsourced wisdom** and **expert advice**.
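The dual-retrieval idea can be sketched without the actual FAISS indexes: score each snippet against the query, take the top hits from both sources, and interleave them so the prompt context mixes crowdsourced and expert advice. A toy illustration with keyword-overlap scoring standing in for vector similarity (all function names and example snippets here are hypothetical):

```python
def score(query, doc):
    """Toy relevance score: fraction of query words that appear in the doc."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def dual_retrieve(query, reddit_docs, pdf_docs, k=2):
    """Take the top-k snippets from each source and interleave them."""
    top = lambda docs: sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
    mixed = []
    for r, p in zip(top(reddit_docs), top(pdf_docs)):
        mixed.extend([r, p])  # alternate crowdsourced and expert snippets
    return mixed

reddit = ["communicate openly with your partner", "give each other space"]
pdf = ["attachment styles shape conflict", "active listening defuses arguments"]
context = dual_retrieve("how to communicate with my partner", reddit, pdf, k=1)
```

In the real pipeline the `score`/`top` step would be a FAISS nearest-neighbor search over embeddings, but the interleaving logic is the same.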
### **⚙️ Training Process**
- **Preprocessing:** Cleaned, tokenized, and formatted the text.
- **Fine-Tuning:** Used **FP16 mixed precision** for efficiency.
- **Model Architecture:** LLaMA, exported to **GGUF** for `llama.cpp` inference.
---
## 📈 Evaluation & Performance
### **🗂️ Testing Data**
The model was tested on **real-life dating scenarios**, such as:
- **"My partner doesn't want to move in together. What should I do?"**
- **"Is it normal to argue every day in a relationship?"**
- **"My crush left me on read 😭 What now?"**
### **📏 Metrics**
- **Engagement Score** – Is the response conversational & engaging?
- **Coherence** – Does the response make sense?
- **Slang & Humor** – Does it feel natural?
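None of these metrics are standardized; the slang-and-humor check, for example, can be roughly proxied by surface heuristics before handing responses to human raters. A toy scorer (the word list, weights, and emoji range are made up for illustration):

```python
import re

# Made-up slang lexicon; a real evaluation would rely on human raters
SLANG = {"yo", "dude", "bro", "ngl", "fr", "lowkey", "vibe"}

def style_score(text):
    """Crude 0-1 'slang & humor' proxy: slang hits plus emoji presence."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    slang_hits = len(words & SLANG)
    has_emoji = any(ord(ch) >= 0x1F300 for ch in text)  # rough emoji codepoint range
    return min(1.0, 0.3 * slang_hits + (0.4 if has_emoji else 0.0))
```

A response like "Yo dude, lowkey just talk to her 🔥" maxes out this proxy, while a clinical answer with no slang or emojis scores zero.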
### **🏆 Results**
✅ **90% of users found the responses engaging** 🎉
✅ **Feels like texting a real friend!**
⚠️ **Sometimes overuses emojis 😂🔥**
---
## 🚨 Model Limitations & Risks
### **⚠️ Bias & Limitations**
- This model **reflects human biases** found in dating advice.
- It may **overgeneralize** relationships & emotions.
- **Not suitable for mental health or therapy**.
### **💡 Recommendations**
✅ Use it for **fun, light-hearted guidance**.
❌ Don't rely on it for **serious relationship decisions**.
---
## 🌍 Environmental Impact
- **Hardware:** NVIDIA A100 GPUs
- **Training Time:** ~24 hours
- **Carbon Emission Estimate:** **5 kg CO2**
---
## 📄 License & Citation
### **📜 License**
Apache 2.0 (or your chosen license).
### **📚 Citation**
```bibtex
@misc{yourname2025datingadvisor,
  title={Dating \& Relationship Advisor AI},
  author={Your Name},
  year={2025},
  publisher={Hugging Face}
}
```
---
## 📢 Uploading to Hugging Face
### **Step 1️⃣: Install Hugging Face CLI**
```bash
pip install huggingface_hub
```
### **Step 2️⃣: Log in**
```bash
huggingface-cli login
```
### **Step 3️⃣: Create a Model Repo**
- Go to [Hugging Face Models](https://huggingface.co/models) → Click **"New Model"**
- **Model ID:** `your-username/your-gguf-model`
- **License:** Apache 2.0
- **Tags:** `llama`, `gguf`, `dating`, `relationships`, `llama.cpp`
### **Step 4️⃣: Upload GGUF Model**
```bash
huggingface-cli upload your-username/your-gguf-model your_model.gguf
```