# BadApple-LLaMA-nano

A 3.5M-parameter LLaMA that memorized every frame of the Bad Apple!! music video as ASCII art (1,745 frames, ~8,760 tokens each). Give it a frame number, get 58 rows of `.` and `@` characters back. 99.78% average accuracy across 1,745 frames.

## Background

This started as a joke. The first version used a custom architecture and tokenizer where each frame was a single token. It hit 100% accuracy in a few minutes of CPU training. Too easy, not interesting.

The next goal: do the same thing, but make it llama.cpp-compatible. That meant working within LLaMA's architecture, using a real tokenizer, and generating frames character by character across ~8,760 tokens. Same task, harder constraints.

## Architecture

| | |
| --- | --- |
| Base | LLaMA (decoder-only transformer) |
| Parameters | 3,499,641 |
| Hidden dim | 256 |
| FFN dim | 512 |
| Layers | 4 |
| Attention heads | 4 |
| Head dim | 64 |
| Context length | 8,860 tokens |
| Vocab | 1,753 (4 special + 4 character + 1,745 frame tokens) |
| Precision | float32 |
| Weights size | 13.4 MB (safetensors) / 13.5 MB (GGUF) |

The vocabulary consists of `<pad>`, `<bos>`, `<eos>`, `<unk>`, four character tokens (`\n`, `.`, `@`, and space), and one token per frame number ("0" through "1744"). Each frame is ~8,760 tokens. The model generates a full frame conditioned on a two-token prompt: `<bos>` followed by the frame-number token.
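For illustration, the vocabulary and prompt could be assembled like this; the ID ordering past `<pad>` (id 0) is an assumption, and `tokenizer.json` is the source of truth:

```python
# Assumed vocabulary layout: 4 special + 4 character + 1,745 frame tokens.
SPECIAL = ["<pad>", "<bos>", "<eos>", "<unk>"]   # <pad> is id 0
CHARS = ["\n", ".", "@", " "]                    # the four character tokens
FRAMES = [str(i) for i in range(1745)]           # "0" .. "1744", one token each

vocab = SPECIAL + CHARS + FRAMES
assert len(vocab) == 1753

token_to_id = {tok: i for i, tok in enumerate(vocab)}

def encode_prompt(frame_no: int) -> list[int]:
    """The two-token prompt: <bos> followed by the frame-number token."""
    return [token_to_id["<bos>"], token_to_id[str(frame_no)]]

print(encode_prompt(42))  # [1, 50] under this assumed ordering
```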

## Accuracy

Best checkpoint: epoch 3,380 (loss 0.000888). Evaluated on 71 frames (every 25th + last):

| Metric | Value |
| --- | --- |
| Average accuracy | 99.78% |
| Minimum accuracy | 94.19% |
| Perfect frames (100%) | 53/71 |
| Frames above 90% | 71/71 |
| Inference speed (7950X, GGUF) | 2,800 tok/s (3s/frame) |
| Inference speed (7950X, PyTorch) | 320 tok/s (27s/frame) |

## Files

| File | Description |
| --- | --- |
| `model.safetensors` | Model weights (float32) |
| `config.json` | Architecture config |
| `tokenizer.json` | Full vocabulary and merges |
| `tokenizer_config.json` | Tokenizer metadata |
| `badapple.gguf` | GGUF for llama.cpp |
| `inference.py` | Generate frames (GGUF or PyTorch) |
| `train.py` | Train from scratch (standalone, no cloud deps) |
| `test_accuracy.py` | Measure accuracy against ground truth |
| `convert_gguf.py` | Convert safetensors to GGUF |

## Quickstart

### inference.py

`inference.py` uses llama-cpp-python (GGUF) when available and falls back to pure PyTorch otherwise.

```bash
# Fast path (~3s/frame): install llama-cpp-python
pip install llama-cpp-python
python inference.py 42           # generate frame 42
python inference.py --play       # play animation in terminal

# PyTorch-only (~27s/frame)
pip install torch safetensors
python inference.py --backend torch 42
```
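To script against the GGUF directly, a minimal llama-cpp-python sketch looks like this (`n_ctx` and the literal `<bos>42` prompt mirror the CLI invocation below; `inference.py` is the reference implementation):

```python
from llama_cpp import Llama

llm = Llama(model_path="badapple.gguf", n_ctx=9100, verbose=False)

# Two-token prompt: <bos> plus the frame-number token. Whether "<bos>" is
# parsed as the special token depends on tokenizer settings, so treat this
# as a sketch; inference.py shows the exact prompt construction.
out = llm("<bos>42", max_tokens=9000, temperature=0.0)
print(out["choices"][0]["text"])
```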

### llama.cpp CLI

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && cmake -B build && cmake --build build -j && cd ..
./llama.cpp/build/bin/llama-completion -m badapple.gguf -p '<bos>42' -n 9000 --temp 0 --special --no-display-prompt -c 9100
```

### Training from scratch

Requires `badapple.txt` (15.4 MB, 1,745 frames delimited by `nekomark`).

```bash
pip install torch safetensors
python train.py --data badapple.txt --epochs 4000
```

On an L4 GPU this takes around 20 hours. The cosine warm-restart schedule (`T_0=200`, `T_mult=2`) reaches sub-0.001 loss by epoch ~3,000.
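That schedule is the stock PyTorch `CosineAnnealingWarmRestarts`; a minimal sketch of the setup, with a stand-in parameter instead of the real model (`train.py` is authoritative):

```python
import torch

# Stand-in parameter; in train.py this is the 3.5M-parameter LLaMA.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=3e-4, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=200, T_mult=2,  # restarts at epochs 200, 600, 1400, 3000
)

for epoch in range(4000):
    # ... one optimizer.step() pass over all 1,745 frames ...
    scheduler.step()
```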

### Testing accuracy

```bash
python test_accuracy.py --data badapple.txt --step 25
```
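Accuracy is per-character agreement with the ground-truth frame. A sketch, assuming the metric is a simple positional match (`test_accuracy.py` defines the actual metric):

```python
def frame_accuracy(generated: str, reference: str) -> float:
    """Fraction of positions where the generated frame matches ground truth.
    Illustrative; test_accuracy.py is authoritative."""
    matches = sum(g == r for g, r in zip(generated, reference))
    return matches / max(len(reference), 1)

print(frame_accuracy("..@@", "..@."))  # 0.75
```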

### Converting to GGUF

```bash
pip install gguf safetensors numpy
python convert_gguf.py --model-dir . --output badapple.gguf
```
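Under the hood, the conversion writes llama.cpp's `llama`-architecture metadata and the tensors from `model.safetensors` via the `gguf` package. A truncated sketch (the zero tensor stands in for real weights, and tokenizer metadata is omitted; `convert_gguf.py` is authoritative):

```python
import numpy as np
from gguf import GGUFWriter

writer = GGUFWriter("badapple.gguf", arch="llama")
writer.add_context_length(8860)
writer.add_embedding_length(256)
writer.add_feed_forward_length(512)
writer.add_block_count(4)
writer.add_head_count(4)

# One tensor shown; the real script converts every tensor in model.safetensors.
writer.add_tensor("token_embd.weight", np.zeros((1753, 256), dtype=np.float32))

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```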

## Training details

| | |
| --- | --- |
| Optimizer | AdamW (lr=3e-4, weight_decay=0.01) |
| Schedule | CosineAnnealingWarmRestarts (T_0=200 epochs, T_mult=2) |
| Batch size | 1 (full frames, ~8,760 tokens each) |
| Mixed precision | AMP float16 on CUDA, float32 on CPU |
| Gradient clipping | max_norm=1.0 |
| Epochs trained | 3,380 |
| Best training loss | 0.000852 |
| Hardware | NVIDIA L4 on Modal ($0.80/hr) |

Each training sample is: `<bos> <frame_N> [character tokens...] <eos>`. The model learns next-token prediction over the full sequence. Padding tokens (id 0) are excluded from the loss.
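A sketch of that objective, assuming the usual shifted cross-entropy with `ignore_index` masking out padding (`train.py` is authoritative):

```python
import torch
import torch.nn.functional as F

def frame_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Next-token loss over one sample <bos> <frame_N> [chars...] <eos>.
    logits: (seq_len, vocab); tokens: (seq_len,), right-padded with id 0."""
    return F.cross_entropy(
        logits[:-1],     # prediction for each next position
        tokens[1:],      # shifted targets
        ignore_index=0,  # <pad> (id 0) contributes nothing to the loss
    )

print(frame_loss(torch.randn(10, 1753), torch.randint(1, 1753, (10,))))
```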

## How it works

The ASCII dataset encodes each of the 1,745 frames from the Bad Apple!! shadow-art video as a 58-row, ~150-column grid using `.` (background) and `@` (foreground). Each frame token acts as a content-addressable key: the model maps the two-token prefix `<bos> <frame_N>` to the full ~8,760-character frame.

At temperature 0, generation is deterministic. The model outputs the memorized frame character by character and stops at `<eos>`.
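A sketch of that temperature-0 loop, assuming a model that maps a batch of token IDs to next-token logits (`inference.py` is the reference):

```python
import torch

@torch.no_grad()
def generate_frame(model, bos_id: int, frame_tok: int, eos_id: int,
                   max_tokens: int = 9000) -> list[int]:
    """Greedy decode: argmax at every step is exactly temperature 0."""
    ids = [bos_id, frame_tok]                 # the two-token prompt
    for _ in range(max_tokens):
        logits = model(torch.tensor([ids]))   # (1, len, vocab) assumed
        next_id = int(logits[0, -1].argmax())
        if next_id == eos_id:
            break
        ids.append(next_id)
    return ids[2:]                            # the frame's character tokens
```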

For deterministic playback via llama.cpp, the GGUF file ships GPT-2 BPE with generated merges so that multi-digit frame numbers (e.g., "1350") tokenize to a single token.
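Generating those merges is less obvious than it looks: naive left-to-right prefix merges fail, because a pair learned for a small number (say `5 0` from "50") can outrank a pair a longer number needs mid-derivation. One construction that does satisfy the single-token property is to tokenize each frame number with the merges collected so far and append a merge for the first remaining pair until one piece is left. A sketch of that construction (`convert_gguf.py` is authoritative for the shipped merges):

```python
def bpe(word: list[str], ranks: dict[tuple[str, str], int]) -> list[str]:
    """Standard BPE: repeatedly merge the lowest-ranked adjacent pair."""
    while len(word) > 1:
        pairs = [(ranks.get((a, b), float("inf")), i)
                 for i, (a, b) in enumerate(zip(word, word[1:]))]
        rank, i = min(pairs)
        if rank == float("inf"):
            break  # no applicable merge left
        word = word[:i] + [word[i] + word[i + 1]] + word[i + 2:]
    return word

ranks: dict[tuple[str, str], int] = {}
for n in range(1745):
    word = list(str(n))
    # Keep adding merges until this number collapses to one token. New
    # merges get higher ranks, so earlier numbers are never re-broken.
    while len(pieces := bpe(word, ranks)) > 1:
        ranks[(pieces[0], pieces[1])] = len(ranks)

assert bpe(list("1350"), ranks) == ["1350"]
```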

## Dataset

The training data is `badapple.txt` from kisekied/BadAppleStringAnimation. It contains 1,745 frames of the Bad Apple!! music video converted to ASCII art. Frames are delimited by `nekomark`.

## License

Creative Commons 0 (CC0). The training dataset (`badapple.txt`) is from kisekied/BadAppleStringAnimation.
