Easter Spirit 2B (GGUF)

License: MIT · Format: GGUF · Runtime: llama.cpp · Base: Qwen2.5-2B-Instruct

Easter Spirit 2B is a compact seasonal model fine-tuned for warm, cheerful, and family-friendly text generation.
This repository provides GGUF builds optimized for local inference using the llama.cpp ecosystem and compatible runtimes.

Overview

This is a tone/personality-focused model. It emphasizes warmth, friendliness, and seasonal flavor rather than deep reasoning or strict technical accuracy.

Recommended for:

  • Creative writing and short stories
  • Holiday / spring-themed roleplay
  • Light conversational assistants
  • Local demos and low-resource systems

Not optimized for:

  • Complex reasoning
  • Factual retrieval
  • Long-horizon planning

Model Details

  • Model name: Easter Spirit 2B
  • Base model: Qwen2.5-2B-Instruct
  • Fine-tuning: LoRA (merged into the base weights)
  • Parameters: ~2B
  • Architecture: qwen2
  • Format: GGUF (llama.cpp compatible)
  • Language: English
  • License: MIT (the base model's license also applies)

Quantized Files

All files are produced from the same merged model and differ only in quantization.

| File | Quantization | Approx. size |
|---|---|---|
| release_v1.TQ1_0.gguf | TQ1_0 | ~0.47 GB |
| release_v1.Q2_K.gguf | Q2_K | ~0.68 GB |
| release_v1.Q3_K_S.gguf | Q3_K_S | ~0.76 GB |
| release_v1.Q3_K_M.gguf | Q3_K_M | ~0.82 GB |
| release_v1.Q4_K_S.gguf | Q4_K_S | ~0.94 GB |
| release_v1.Q4_K_M.gguf | Q4_K_M | ~0.99 GB |
| release_v1.Q5_K_S.gguf | Q5_K_S | ~1.10 GB |
| release_v1.Q5_K_M.gguf | Q5_K_M | ~1.13 GB |
| release_v1.Q6_K.gguf | Q6_K | ~1.27 GB |
| release_v1.Q8_0.gguf | Q8_0 | ~1.65 GB |

Recommendations

  • Default (balanced): Q4_K_M
  • Higher quality: Q5_K_M, Q6_K, Q8_0
  • Low RAM systems: Q3_K_M, Q2_K
  • Ultra-low memory (experimental): TQ1_0
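
As a rough memory-planning aid, the sketch below picks the largest quantization from the table above whose file fits in a given RAM budget. The file sizes are copied from the table; the 1.2× overhead factor (headroom for the KV cache and runtime buffers) is an assumption, not a measurement.

```python
# Pick the largest quant whose estimated memory use fits a RAM budget.
# Sizes (GB) come from the quantization table above; OVERHEAD is an
# assumed multiplier leaving headroom for KV cache + runtime buffers.
QUANT_SIZES_GB = {
    "TQ1_0": 0.47, "Q2_K": 0.68, "Q3_K_S": 0.76, "Q3_K_M": 0.82,
    "Q4_K_S": 0.94, "Q4_K_M": 0.99, "Q5_K_S": 1.10, "Q5_K_M": 1.13,
    "Q6_K": 1.27, "Q8_0": 1.65,
}
OVERHEAD = 1.2  # assumption, tune for your context size and runtime

def pick_quant(ram_budget_gb: float):
    """Return the largest quant fitting the budget, or None if none fits."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size * OVERHEAD <= ram_budget_gb]
    return max(fitting)[1] if fitting else None

print(pick_quant(2.0))  # → Q8_0 fits a 2 GB budget
print(pick_quant(1.0))  # → Q3_K_M fits a 1 GB budget
```

Treat the result as a starting point only; actual memory use depends on context length (`-c`) and the runtime.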

Usage (llama.cpp)

CPU-only

# Run fully on CPU (-ngl 0 offloads no layers to the GPU) with a 4096-token context.
./llama-cli \
  -m release_v1.Q4_K_M.gguf \
  -ngl 0 \
  -c 4096 \
  -p "Write a cozy springtime story inspired by Easter morning in a small town."
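
In conversation mode, llama-cli applies the chat template embedded in the GGUF metadata. If you instead drive the model through a raw-completion interface, you may need to format the prompt yourself; Qwen2.5-based models use the ChatML format. A minimal sketch (the system message here is an illustrative assumption):

```python
# Minimal ChatML prompt builder for Qwen2.5-based GGUF models.
# The default system message is an illustrative assumption.
def chatml_prompt(user_msg: str,
                  system_msg: str = "You are a cheerful springtime assistant.") -> str:
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("Write a short Easter morning poem."))
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to complete.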