# Database Admin-SLM: Role-Based Small Language Model
A LLaMA-style transformer with ~1.01B parameters (1007.5M), trained from scratch for the Database Admin role. It supports contexts of up to 1M tokens via RoPE and was trained with gradient checkpointing.
## Architecture
| Component | Value |
|---|---|
| Architecture | LLaMA-style (RoPE + RMSNorm + SwiGLU) |
| Parameters | ~1.01B (1007.5M) |
| Layers | 32 |
| Heads | 20 |
| Embedding | 1600 |
| Max Context | 1,000,000 tokens |
| Max Output | 1,000,000 tokens |
| Vocab | 13,202 BPE |
| Model Size | ~4 GB (fp32) |
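Read as a configuration object, the table corresponds roughly to the sketch below. This is illustrative only: the field names are not the repository's actual config schema.

```python
from dataclasses import dataclass

@dataclass
class DatabaseAdminSLMConfig:
    # Values taken from the table above; names are hypothetical.
    n_layers: int = 32
    n_heads: int = 20              # 1600 / 20 heads = 80 dims per head
    d_model: int = 1600            # embedding dimension
    vocab_size: int = 13_202       # BPE vocabulary
    max_context: int = 1_000_000   # 1M-token context via RoPE
    max_output: int = 1_000_000
```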
## Training
- Best eval loss: ≈ 6.77
- Trained with gradient checkpointing on an Apple M4 (MPS backend)
- 3 epochs, batch_size=1, grad_accum=16 (see the sketch after this list)
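As a concrete illustration of that setup, here is a self-contained PyTorch sketch combining gradient checkpointing with 16-step gradient accumulation on MPS. The `Block`/`TinyModel` classes are toy stand-ins, not the actual model, which is not published in this repo.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    """Toy stand-in for one transformer block."""
    def __init__(self, d):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        return x + self.ff(x)

class TinyModel(nn.Module):
    def __init__(self, d=64, n_layers=4):
        super().__init__()
        self.blocks = nn.ModuleList([Block(d) for _ in range(n_layers)])

    def forward(self, x):
        for blk in self.blocks:
            # Recompute activations during backward instead of storing them,
            # trading extra compute for the memory that long contexts need.
            x = checkpoint(blk, x, use_reentrant=False)
        return x

device = "mps" if torch.backends.mps.is_available() else "cpu"
model = TinyModel().to(device)
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
GRAD_ACCUM = 16

opt.zero_grad()
for step in range(GRAD_ACCUM):
    x = torch.randn(1, 128, 64, device=device)  # batch_size=1 micro-batch
    loss = model(x).pow(2).mean()               # dummy loss for illustration
    (loss / GRAD_ACCUM).backward()              # average grads across micro-batches
opt.step()
```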
## Usage

```python
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# Download the weights and the tokenizer from the Hub.
model_path = hf_hub_download("sathishphdai/database-admin-slm-1m", "model.safetensors")
tokenizer_path = hf_hub_download("sathishphdai/database-admin-slm-1m", "database_admin_tokenizer.json")

tokenizer = Tokenizer.from_file(tokenizer_path)
```
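From there, the downloaded artifacts can be inspected; a minimal sketch assuming the `safetensors` package is installed (instantiating the model itself requires the custom LLaMA-style model class, which is not shown here, and the prompt is illustrative):

```python
from safetensors.torch import load_file

# Raw weight tensors; loading into a module needs the repo's model class.
state_dict = load_file(model_path)
n_params = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, ~{n_params / 1e9:.2f}B parameters")

# Tokenize a role-specific prompt with the trained BPE tokenizer.
ids = tokenizer.encode("How do I rebuild a fragmented index?").ids
print(ids[:10])
```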