# Mox-Small-1

**VANTA Research** · Independent AI safety research lab specializing in cognitive fit, alignment, and human-AI collaboration

A direct, opinionated AI assistant fine-tuned for authentic engagement and genuine helpfulness.

Mox-Small-1 is a persona-tuned language model developed by VANTA Research, built on the Olmo 3.1 32B Instruct architecture. Like its sibling Mox-Tiny-1, it prioritizes clarity, honesty, and usefulness over agreeableness, with the added reasoning depth of its larger base model.
Mox-Small-1 will:
- Give direct opinions instead of hedging
- Push back on flawed premises (respectfully but firmly)
- Admit uncertainty transparently
- Engage with genuine curiosity and humor
## Key Characteristics
| Trait | Description |
|---|---|
| Direct & Opinionated | Clear answers, no endless "on the other hand" equivocation |
| Constructively Disagreeable | Challenges weak arguments without being combative |
| Epistemically Calibrated | Distinguishes confident knowledge from uncertainty |
| Warm with Humor | Playful but professional, with levity where appropriate |
| Intellectually Curious | Dives deep into interesting questions |
## Training Data
Fine-tuned on ~18,000 curated conversations across 17 datasets, including:
- Direct Opinions (~1k examples)
- Constructive Disagreement (~1.6k examples)
- Epistemic Confidence (~1.5k examples)
- Humor & Levity (~1.5k examples)
- Wonder & Puzzlement (~1.7k examples)

These are the same datasets used for Mox-Tiny-1, so the persona and tone are identical.

**Training Duration:** ~3 days
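Fine-tuning used QLoRA (see Technical Details below): the 32B base is loaded with 4-bit quantized weights and only small low-rank adapter matrices are trained. A minimal sketch of that kind of setup follows; the rank, alpha, dropout, and target modules are illustrative assumptions, not the actual training configuration.

```python
# Minimal QLoRA setup sketch. Hyperparameters are illustrative assumptions;
# the card does not disclose the actual training configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the card's BF16 precision
)
base = AutoModelForCausalLM.from_pretrained(
    "allenai/Olmo-3-1125-32B",              # base model listed on this card
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=16,                                   # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)   # only the adapters receive gradients
```

Because only the adapter weights train while the quantized base stays frozen, this is what makes fine-tuning a 32B model practical on modest hardware.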
## Intended Use
- Thinking partnership (complex problem-solving)
- Honest feedback (direct opinions, not validation)
- Technical discussions (programming, architecture, debugging)
- Intellectual exploration (philosophy, science, open-ended questions)
## Technical Details
| Property | Value |
|---|---|
| Base Model | Olmo 3.1 32B Instruct (allenai/Olmo-3-1125-32B) |
| Fine-tuning Method | QLoRA |
| Context Length | 64K tokens |
| Precision | BF16 (full), Q4_K_M (quantized) |
| License | Apache 2.0 |
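Q4_K_M is a llama.cpp (GGUF) quantization scheme, so the quantized build is presumably intended for GGUF runtimes. If a GGUF export is published, it could be run with llama-cpp-python roughly as follows; the filename here is hypothetical:

```python
# Hypothetical GGUF inference sketch; the .gguf filename is an assumption,
# not a confirmed artifact of this release.
from llama_cpp import Llama

llm = Llama(
    model_path="mox-small-1-Q4_K_M.gguf",  # assumed local path to the quantized weights
    n_ctx=65536,                           # the card's 64K context window
)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me your honest take on this plan."}]
)
print(response["choices"][0]["message"]["content"])
```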
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Requires transformers (and accelerate for device_map="auto")
model = AutoModelForCausalLM.from_pretrained(
    "vanta-research/mox-small-1", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("vanta-research/mox-small-1")
```
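Mox-Small-1 is a chat model, so prompts should go through the tokenizer's chat template. Assuming the fine-tune inherits the base model's template, a generation call looks roughly like this (the prompt is just an illustration):

```python
# Chat-style generation, continuing from the snippet above.
messages = [
    {"role": "user", "content": "Is a microservices rewrite worth it for a five-person team?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```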
## Limitations

This model was fine-tuned on an English-only dataset. Personality traits may occasionally conflict, and the base model's limitations and biases apply (knowledge cutoff, potential hallucinations).

VANTA Research encourages developers to independently assess production readiness prior to downstream deployment.
## Citation

```bibtex
@misc{mox-small-1-2026,
  author    = {VANTA Research},
  title     = {Mox-Small-1: A Direct, Opinionated AI Assistant},
  year      = {2026},
  publisher = {VANTA Research}
}
```
## Contact
- Organization: hello@vantaresearch.xyz
- Engineering/Design: tyler@vantaresearch.xyz