TinyLlama-1.1B Quote Generator (GGUF)

This repository contains a fine-tuned version of the TinyLlama/TinyLlama-1.1B-Chat-v1.0 model, quantized to GGUF (Q4_K_M) format.

This model was trained to generate short, original quotes based on a keyword, using the Abirate/english_quotes dataset.

Performance Note

This GGUF model is designed for efficient inference on a CPU. However, performance is highly dependent on your hardware. On low-power or shared CPUs (such as the Hugging Face Spaces "CPU Basic" free tier), inference can be prohibitively slow.

For best results, use this model on a local machine with a modern multi-core CPU.

⚑ How to Use (Local CPU)

This model is intended to be used with llama-cpp-python.

1. Installation

pip install llama-cpp-python huggingface_hub
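By default, pip builds llama-cpp-python from source with plain CPU support. If you want to link it against OpenBLAS for faster prompt processing, build flags can be passed through the CMAKE_ARGS environment variable. The flags below follow the llama-cpp-python installation docs; verify them against the version you are installing:

```shell
# Optional: rebuild llama-cpp-python against OpenBLAS for faster CPU inference.
# Flag names are from the llama-cpp-python installation docs; check them
# against the release you install.
CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS" \
    pip install llama-cpp-python --force-reinstall --no-cache-dir
```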

2. Python Example

This code will download and run the model on your local CPU.

from llama_cpp import Llama
from huggingface_hub import hf_hub_download
import os

repo_id = "bkqz/tinyllama-quotes-generator-gguf"
gguf_file = "tinyllama-quotes-Q4_K_M.gguf"

# 1. Download the model
model_path = hf_hub_download(
    repo_id=repo_id,
    filename=gguf_file
)

# 2. Load the model
llm = Llama(
    model_path=model_path,
    n_ctx=512,      # Context window
    n_threads=os.cpu_count() - 1, # Use all cores but one, leaving one free for the system
    n_gpu_layers=0  # Use 0 for CPU-only
)

# 3. Set your keyword
keyword = "success"

# 4. Format the prompt EXACTLY as shown
prompt = f"Keyword: {keyword}\nQuote:"

# 5. Generate the quote
output = llm.create_completion(
    prompt,
    max_tokens=80,
    temperature=0.7,
    top_p=0.9,
    stop=["\n", "Keyword:"], # Stop at a newline or a new "Keyword:" block
    echo=False
)

quote = output["choices"][0]["text"].strip()
print(f"Keyword: {keyword}")
print(f"Generated Quote: {quote}")

πŸ’¬ Prompt Format

This model was trained on a very specific format. For best results, your prompt must end with \nQuote:.

Keyword: [YOUR_KEYWORD]\nQuote:

The model will generate a single quote and append - Unknown.
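Since the prompt must match the training format exactly, a small helper can build it and strip the trailing attribution from the output. The function names `build_prompt` and `clean_quote` below are illustrative, not part of this repo or llama-cpp-python:

```python
def build_prompt(keyword: str) -> str:
    """Format a keyword exactly as the model saw it during training."""
    return f"Keyword: {keyword.strip()}\nQuote:"

def clean_quote(raw: str) -> str:
    """Strip whitespace and the trailing '- Unknown' attribution, if present."""
    quote = raw.strip()
    if quote.endswith("- Unknown"):
        quote = quote[: -len("- Unknown")].rstrip()
    return quote

print(build_prompt("success"))   # Keyword: success\nQuote:
print(clean_quote(" Dream big. - Unknown "))  # Dream big.
```

Pass the result of `build_prompt` to `llm.create_completion`, then run the returned text through `clean_quote` if you do not want the attribution in the final string.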

πŸ› οΈ Training & Conversion Process

This model was created using the 01_finetune_and_gguf_conversion.ipynb notebook.

  1. Fine-Tuning: The base TinyLlama model was fine-tuned on a T4 GPU using QLoRA.
  2. Dataset: The Abirate/english_quotes dataset was "exploded" so that each (quote, tag) pair became a unique training example.
  3. Format: The training text was formatted as Keyword: [tag]\nQuote: [quote] - Unknown to prevent the model from adding real authors.
  4. Merging: The trained LoRA adapters were merged into a full-precision float16 base model.
  5. Conversion: This float16 model was converted to GGUF using llama.cpp. This involved a two-step process:
    • Converting to an intermediate f16 GGUF using convert_hf_to_gguf.py.
    • Compressing the f16 file to Q4_K_M using the llama-quantize executable.
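The two conversion steps above look roughly like this when run from a llama.cpp checkout. The input directory and file names are placeholders, not the exact commands from the notebook:

```shell
# Step 1: convert the merged float16 HF model to an intermediate f16 GGUF.
# ./merged-model/ is a placeholder for the directory holding the merged weights.
python convert_hf_to_gguf.py ./merged-model/ \
    --outfile tinyllama-quotes-f16.gguf --outtype f16

# Step 2: compress the f16 GGUF down to Q4_K_M with the llama-quantize binary.
./llama-quantize tinyllama-quotes-f16.gguf tinyllama-quotes-Q4_K_M.gguf Q4_K_M
```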