# TinyLlama-1.1B Quote Generator (GGUF)
This repository contains a fine-tuned version of the `TinyLlama/TinyLlama-1.1B-Chat-v1.0` model, quantized to GGUF (`Q4_K_M`) format.

This model was trained to generate short, original quotes based on a keyword, using the `Abirate/english_quotes` dataset.
## Performance Note
This GGUF model is designed for efficient inference on a CPU. However, performance is highly dependent on your hardware. On low-power or shared CPUs (such as the Hugging Face Spaces "CPU Basic" free tier), inference can be prohibitively slow.
For best results, use this model on a local machine with a modern multi-core CPU.
## ⚡ How to Use (Local CPU)
This model is intended to be used with `llama-cpp-python`.
### 1. Installation

```shell
pip install llama-cpp-python huggingface_hub
```
### 2. Python Example
This code will download and run the model on your local CPU.
```python
from llama_cpp import Llama
from huggingface_hub import hf_hub_download
import os

repo_id = "bkqz/tinyllama-quotes-generator-gguf"
gguf_file = "tinyllama-quotes-Q4_K_M.gguf"

# 1. Download the model
model_path = hf_hub_download(
    repo_id=repo_id,
    filename=gguf_file
)

# 2. Load the model
llm = Llama(
    model_path=model_path,
    n_ctx=512,                                    # Context window
    n_threads=max(1, (os.cpu_count() or 1) - 1),  # Leave one core free for the system
    n_gpu_layers=0                                # 0 = CPU-only
)

# 3. Set your keyword
keyword = "success"

# 4. Format the prompt EXACTLY as shown
prompt = f"Keyword: {keyword}\nQuote:"

# 5. Generate the quote
output = llm.create_completion(
    prompt,
    max_tokens=80,
    temperature=0.7,
    top_p=0.9,
    stop=["\n", "Keyword:"],  # Stop at a newline or at the start of a new prompt turn
    echo=False
)

quote = output["choices"][0]["text"].strip()
print(f"Keyword: {keyword}")
print(f"Generated Quote: {quote}")
```
## 💬 Prompt Format

This model was trained on a very specific format. For best results, your prompt must end with `\nQuote:`.

```
Keyword: [YOUR_KEYWORD]\nQuote:
```

The model will generate a single quote and append `- Unknown`.
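Since every generation ends with the `- Unknown` attribution, you may want to strip it before displaying the quote. A minimal post-processing sketch (the helper name `clean_quote` is an illustration, not part of this repository):

```python
def clean_quote(text: str) -> str:
    """Strip the trailing '- Unknown' attribution the model appends."""
    text = text.strip()
    suffix = "- Unknown"
    if text.endswith(suffix):
        text = text[: -len(suffix)].rstrip(" -")
    return text

print(clean_quote("Success is a journey, not a destination. - Unknown"))
print(clean_quote("A quote with no attribution"))
```

Strings that lack the suffix pass through unchanged, so the helper is safe to apply to every completion.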
## 🛠️ Training & Conversion Process
This model was created using the `01_finetune_and_gguf_conversion.ipynb` notebook.

- **Fine-Tuning:** The base `TinyLlama` model was fine-tuned on a T4 GPU using QLoRA.
- **Dataset:** The `Abirate/english_quotes` dataset was "exploded" so that each `(quote, tag)` pair became a unique training example.
- **Format:** The training text was formatted as `Keyword: [tag]\nQuote: [quote] - Unknown` to prevent the model from adding real authors.
- **Merging:** The trained LoRA adapters were merged back into the `float16` base model.
- **Conversion:** The `float16` model was converted to GGUF using `llama.cpp` in a two-step process:
  1. Converting to an intermediate `f16` GGUF using `convert_hf_to_gguf.py`.
  2. Quantizing the `f16` file to `Q4_K_M` using the `llama-quantize` executable.
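The "explode" and formatting steps described above can be sketched in plain Python. This is an illustrative reconstruction, not the notebook's actual code; the function name `explode_and_format` and the record layout (`quote` plus a list of `tags`, matching the `Abirate/english_quotes` schema) are assumptions:

```python
def explode_and_format(records):
    """Turn each (quote, tags) record into one training example per tag,
    formatted exactly as the model expects."""
    examples = []
    for rec in records:
        for tag in rec["tags"]:
            examples.append(f"Keyword: {tag}\nQuote: {rec['quote']} - Unknown")
    return examples

records = [{"quote": "Be yourself; everyone else is already taken.",
            "tags": ["inspiration", "honesty"]}]
for ex in explode_and_format(records):
    print(ex)
    print("---")
```

A record with two tags yields two training examples, which is why the exploded dataset is larger than the original quote list.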