lbourdois committed
Commit b461488 · verified · 1 Parent(s): aa6fdac

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the `language` tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.
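For reference, the 13 codes added here are ISO 639-3 identifiers, and the `language:` block they go into is plain YAML. A minimal sketch of generating that block (the `language_block` helper is hypothetical, not part of this commit):

```python
# The 13 language codes explicitly listed in the README (ISO 639-3).
LANGS = ["zho", "eng", "fra", "spa", "por", "deu", "ita", "rus",
         "jpn", "kor", "vie", "tha", "ara"]

def language_block(codes):
    """Render the `language:` section of a model card's YAML front matter."""
    return "\n".join(["language:"] + [f"- {code}" for code in codes])

print(language_block(LANGS))
```

The Hub indexes these codes from the card's front matter, which is how the model becomes discoverable under each language filter.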

Files changed (1)
  1. README.md +206 -194
README.md CHANGED
@@ -1,195 +1,207 @@
- ---
- language:
- - en
- base_model:
- - Qwen/Qwen2.5-3B
- new_version: Qwen/Qwen2.5-3B
- library_name: sentence-transformers
- ---
- # 🔥 Dating & Relationship Advisor GGUF 🔥
-
- ## 📌 Model Summary
- This model is a **casual, informal AI assistant** designed to provide **dating and relationship advice** in a fun, unfiltered, and humorous way. It uses **slang, jokes, emojis, and a conversational tone**, making it feel like you're chatting with a friend rather than a traditional AI.
-
- The model has been **fine-tuned** using a combination of:
- - **Crowdsourced dating advice (Reddit FAISS)** 📌
- - **Expert relationship guides & books (PDF FAISS)** 📚
-
- It supports **two main deployment methods**:
- 1. **Google Drive Method** – Loading the model from Google Drive.
- 2. **Hugging Face Method** – Downloading & using the model from Hugging Face Hub.
-
- ---
-
- ## 📚 Model Details
- - **Model Type:** GGUF-based LLaMA model
- - **Developed by:** [Your Name / Organization]
- - **Language:** English
- - **License:** Apache 2.0 (or your choice)
- - **Base Model:** LLaMA (Meta)
- - **Training Data:** Relationship advice forums, dating guides, and expert PDFs
- - **Inference Framework:** `llama-cpp-python`
-
- ---
-
- ## 🚀 How to Use the Model
- ### **1️⃣ Method 1: Load from Google Drive**
- #### **Step 1: Install Dependencies**
- ```bash
- pip install llama-cpp-python
- ```
- #### **Step 2: Mount Google Drive & Load Model**
- ```python
- from llama_cpp import Llama
- import random
-
- # Google Drive path
- model_path = "/content/drive/MyDrive/Dating_LLM_GGUF/damn.gguf"
-
- # Load the model
- llm = Llama(
-     model_path=model_path,
-     n_gpu_layers=40,
-     n_ctx=2048,
-     system_message="You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"
- )
-
- # Function to modify user input
- def make_emotional(user_input):
-     salutation = random.choice(["Yo dude! 😎", "Hey buddy! 🙌", "Listen up, my friend ❤️"])
-     suffix = " Give me some real, no-BS advice with emojis! 😂🔥💖"
-     return f"{salutation} {user_input} {suffix}"
-
- # Run inference
- user_input = "My partner doesn't like my friends. What should I do?"
- emotional_prompt = make_emotional(user_input)
- output = llm(emotional_prompt, max_tokens=200)
-
- # Print the output
- print(output["choices"][0]["text"])
- ```
-
- ---
-
- ### **2️⃣ Method 2: Load from Hugging Face**
- #### **Step 1: Install Dependencies**
- ```bash
- pip install llama-cpp-python huggingface_hub
- ```
- #### **Step 2: Download Model from Hugging Face**
- ```python
- from llama_cpp import Llama
- from huggingface_hub import hf_hub_download
-
- # Download model from Hugging Face Hub
- model_path = hf_hub_download(
-     repo_id="your-username/your-gguf-model",
-     filename="your_model.gguf",
-     cache_dir="./models"
- )
-
- # Load the model
- llm = Llama(
-     model_path=model_path,
-     n_gpu_layers=40,
-     n_ctx=2048,
-     system_message="You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"
- )
-
- # Run inference
- user_input = "My girlfriend is always busy and doesn't text me much. What should I do?"
- response = llm(user_input, max_tokens=200)
- print(response["choices"][0]["text"])
- ```
-
- ---
-
- ## 💾 Training Details
- ### **📚 Training Data**
- This model was trained on a diverse dataset, including:
- ✅ **Reddit FAISS** – Extracts **real-world** dating discussions from **Reddit posts**.
- ✅ **PDF FAISS** – Retrieves relationship **expert opinions & guides** from books.
-
- The **dual FAISS retrieval system** ensures that the model provides a mix of **crowdsourced wisdom** and **expert advice**.
-
- ### **⚙️ Training Process**
- - **Preprocessing:** Cleaned, tokenized, and formatted text.
- - **Fine-Tuning:** Used **FP16 mixed precision** for efficiency.
- - **Model Architecture:** GGUF version of LLaMA.
-
- ---
-
- ## 📊 Evaluation & Performance
- ### **🗒️ Testing Data**
- The model was tested on **real-life dating scenarios**, such as:
- - **"My partner doesn’t want to move in together. What should I do?"**
- - **"Is it normal to argue every day in a relationship?"**
- - **"My crush left me on read 😭 What now?"**
-
- ### **📌 Metrics**
- - **Engagement Score** – Is the response conversational & engaging?
- - **Coherence** – Does the response make sense?
- - **Slang & Humor** – Does it feel natural?
-
- ### **📈 Results**
- ✅ **90% of users found the responses engaging** 🎉
- ✅ **Feels like texting a real friend!**
- ✅ **Sometimes overuses emojis 😂🔥**
-
- ---
-
- ## 🛑 Model Limitations & Risks
- ### **⚠️ Bias & Limitations**
- - This model **reflects human biases** found in dating advice.
- - It may **overgeneralize** relationships & emotions.
- - **Not suitable for mental health or therapy**.
-
- ### **📌 Recommendations**
- ✅ Use it for **fun, light-hearted guidance**.
- ❌ Don't rely on it for **serious relationship decisions**.
-
- ---
-
- ## 🌍 Environmental Impact
- - **Hardware:** NVIDIA A100 GPUs
- - **Training Time:** ~24 hours
- - **Carbon Emission Estimate:** **5 kg CO2**
-
- ---
-
- ## 💜 License & Citation
- ### **📚 License**
- 📝 Apache 2.0 (or your chosen license).
-
- ### **📒 Citation**
- ```bibtex
- @misc{yourname2025datingadvisor,
-     title={Dating & Relationship Advisor AI},
-     author={Your Name},
-     year={2025},
-     publisher={Hugging Face}
- }
- ```
-
- ---
-
- ## 📒 Uploading to Hugging Face
- ### **Step 1️⃣: Install Hugging Face CLI**
- ```bash
- pip install huggingface_hub
- ```
- ### **Step 2️⃣: Log in**
- ```bash
- huggingface-cli login
- ```
-
- ### **Step 3️⃣: Create a Model Repo**
- - Go to [Hugging Face Models](https://huggingface.co/models) → Click **"New Model"**
- - **Model ID:** `your-username/your-gguf-model`
- - **License:** Apache 2.0
- - **Tags:** `llama`, `gguf`, `dating`, `relationships`, `llama.cpp`
-
- ### **Step 4️⃣: Upload GGUF Model**
- ```bash
- huggingface-cli upload your-username/your-gguf-model your_model.gguf
  ```
+ ---
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ base_model:
+ - Qwen/Qwen2.5-3B
+ new_version: Qwen/Qwen2.5-3B
+ library_name: sentence-transformers
+ ---
+ # 🔥 Dating & Relationship Advisor GGUF 🔥
+
+ ## 📌 Model Summary
+ This model is a **casual, informal AI assistant** designed to provide **dating and relationship advice** in a fun, unfiltered, and humorous way. It uses **slang, jokes, emojis, and a conversational tone**, making it feel like you're chatting with a friend rather than a traditional AI.
+
+ The model has been **fine-tuned** using a combination of:
+ - **Crowdsourced dating advice (Reddit FAISS)** 📌
+ - **Expert relationship guides & books (PDF FAISS)** 📚
+
+ It supports **two main deployment methods**:
+ 1. **Google Drive Method** – Loading the model from Google Drive.
+ 2. **Hugging Face Method** – Downloading & using the model from Hugging Face Hub.
+
+ ---
+
+ ## 📚 Model Details
+ - **Model Type:** GGUF-based LLaMA model
+ - **Developed by:** [Your Name / Organization]
+ - **Language:** English
+ - **License:** Apache 2.0 (or your choice)
+ - **Base Model:** LLaMA (Meta)
+ - **Training Data:** Relationship advice forums, dating guides, and expert PDFs
+ - **Inference Framework:** `llama-cpp-python`
+
+ ---
+
+ ## 🚀 How to Use the Model
+ ### **1️⃣ Method 1: Load from Google Drive**
+ #### **Step 1: Install Dependencies**
+ ```bash
+ pip install llama-cpp-python
+ ```
+ #### **Step 2: Mount Google Drive & Load Model**
+ ```python
+ from llama_cpp import Llama
+ import random
+
+ # Google Drive path
+ model_path = "/content/drive/MyDrive/Dating_LLM_GGUF/damn.gguf"
+
+ # Load the model
+ llm = Llama(
+     model_path=model_path,
+     n_gpu_layers=40,
+     n_ctx=2048,
+     system_message="You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"
+ )
+
+ # Function to modify user input
+ def make_emotional(user_input):
+     salutation = random.choice(["Yo dude! 😎", "Hey buddy! 🙌", "Listen up, my friend ❤️"])
+     suffix = " Give me some real, no-BS advice with emojis! 😂🔥💖"
+     return f"{salutation} {user_input} {suffix}"
+
+ # Run inference
+ user_input = "My partner doesn't like my friends. What should I do?"
+ emotional_prompt = make_emotional(user_input)
+ output = llm(emotional_prompt, max_tokens=200)
+
+ # Print the output
+ print(output["choices"][0]["text"])
+ ```
+
+ ---
+
+ ### **2️⃣ Method 2: Load from Hugging Face**
+ #### **Step 1: Install Dependencies**
+ ```bash
+ pip install llama-cpp-python huggingface_hub
+ ```
+ #### **Step 2: Download Model from Hugging Face**
+ ```python
+ from llama_cpp import Llama
+ from huggingface_hub import hf_hub_download
+
+ # Download model from Hugging Face Hub
+ model_path = hf_hub_download(
+     repo_id="your-username/your-gguf-model",
+     filename="your_model.gguf",
+     cache_dir="./models"
+ )
+
+ # Load the model
+ llm = Llama(
+     model_path=model_path,
+     n_gpu_layers=40,
+     n_ctx=2048,
+     system_message="You are an unfiltered, informal AI assistant. Use slang, humor, and emojis!"
+ )
+
+ # Run inference
+ user_input = "My girlfriend is always busy and doesn't text me much. What should I do?"
+ response = llm(user_input, max_tokens=200)
+ print(response["choices"][0]["text"])
+ ```
+
+ ---
+
+ ## 💾 Training Details
+ ### **📚 Training Data**
+ This model was trained on a diverse dataset, including:
+ ✅ **Reddit FAISS** – Extracts **real-world** dating discussions from **Reddit posts**.
+ ✅ **PDF FAISS** – Retrieves relationship **expert opinions & guides** from books.
+
+ The **dual FAISS retrieval system** ensures that the model provides a mix of **crowdsourced wisdom** and **expert advice**.
+
+ ### **⚙️ Training Process**
+ - **Preprocessing:** Cleaned, tokenized, and formatted text.
+ - **Fine-Tuning:** Used **FP16 mixed precision** for efficiency.
+ - **Model Architecture:** GGUF version of LLaMA.
+
+ ---
+
+ ## 📊 Evaluation & Performance
+ ### **🗒️ Testing Data**
+ The model was tested on **real-life dating scenarios**, such as:
+ - **"My partner doesn’t want to move in together. What should I do?"**
+ - **"Is it normal to argue every day in a relationship?"**
+ - **"My crush left me on read 😭 What now?"**
+
+ ### **📌 Metrics**
+ - **Engagement Score** – Is the response conversational & engaging?
+ - **Coherence** – Does the response make sense?
+ - **Slang & Humor** – Does it feel natural?
+
+ ### **📈 Results**
+ ✅ **90% of users found the responses engaging** 🎉
+ ✅ **Feels like texting a real friend!**
+ ✅ **Sometimes overuses emojis 😂🔥**
+
+ ---
+
+ ## 🛑 Model Limitations & Risks
+ ### **⚠️ Bias & Limitations**
+ - This model **reflects human biases** found in dating advice.
+ - It may **overgeneralize** relationships & emotions.
+ - **Not suitable for mental health or therapy**.
+
+ ### **📌 Recommendations**
+ ✅ Use it for **fun, light-hearted guidance**.
+ ❌ Don't rely on it for **serious relationship decisions**.
+
+ ---
+
+ ## 🌍 Environmental Impact
+ - **Hardware:** NVIDIA A100 GPUs
+ - **Training Time:** ~24 hours
+ - **Carbon Emission Estimate:** **5 kg CO2**
+
+ ---
+
+ ## 💜 License & Citation
+ ### **📚 License**
+ 📝 Apache 2.0 (or your chosen license).
+
+ ### **📒 Citation**
+ ```bibtex
+ @misc{yourname2025datingadvisor,
+     title={Dating & Relationship Advisor AI},
+     author={Your Name},
+     year={2025},
+     publisher={Hugging Face}
+ }
+ ```
+
+ ---
+
+ ## 📒 Uploading to Hugging Face
+ ### **Step 1️⃣: Install Hugging Face CLI**
+ ```bash
+ pip install huggingface_hub
+ ```
+ ### **Step 2️⃣: Log in**
+ ```bash
+ huggingface-cli login
+ ```
+
+ ### **Step 3️⃣: Create a Model Repo**
+ - Go to [Hugging Face Models](https://huggingface.co/models) → Click **"New Model"**
+ - **Model ID:** `your-username/your-gguf-model`
+ - **License:** Apache 2.0
+ - **Tags:** `llama`, `gguf`, `dating`, `relationships`, `llama.cpp`
+
+ ### **Step 4️⃣: Upload GGUF Model**
+ ```bash
+ huggingface-cli upload your-username/your-gguf-model your_model.gguf
  ```