# Ministral-3-14B-Reasoning-2512-MLX-4bit

This is a 4-bit quantized MLX version of Ministral-3-14B-Reasoning-2512 for Apple Silicon Macs.
## Known Limitations

**Vision capabilities are NOT working in this MLX conversion.** The model runs text-only inference successfully, but the Pixtral vision encoder does not properly process images. This appears to be a known issue with mlx-vlm's Mistral3/Pixtral support. Use this model for text-only tasks until mlx-vlm fixes Mistral3 vision support.
## Model Details

| Property | Value |
|---|---|
| Original Model | mistralai/Ministral-3-14B-Reasoning-2512 |
| Parameters | 14B (13.5B LLM + 0.4B Vision) |
| Quantization | 4-bit (group size 64) |
| Size | ~7.9 GB |
| Framework | MLX |
| Context Length | 256K tokens |
| Vision Support | Not working (see above) |
## What Works

- Text generation: Full reasoning capabilities with `[THINK]` tags
- Multilingual: 11 languages supported
- Function calling: Native tool use support
- Performance: ~45-50 tokens/sec on Apple Silicon
## What Doesn't Work

- Vision/Image understanding: The Pixtral vision encoder is included but does not properly process images due to mlx-vlm compatibility issues
## Requirements

- macOS 15.0+ (Sequoia)
- Apple Silicon Mac (M1/M2/M3/M4)
- 16GB+ unified memory recommended
- Python 3.10+
## Installation

```bash
pip install mlx-vlm
```
## Usage (Text-Only)

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template

# Load the quantized model and its processor
model, processor = load("hunterbown/Ministral-3-14B-Reasoning-2512-MLX-4bit")

# Build a chat-formatted prompt for a text-only reasoning task
prompt = apply_chat_template(
    processor,
    config=model.config,
    prompt="Solve this step by step: What is 15% of 240?"
)

output = generate(model, processor, prompt, max_tokens=500)
print(output.text)
```
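The reasoning trace is emitted inline. A minimal sketch for separating it from the final answer, assuming the model wraps its chain of thought in a `[THINK]...[/THINK]` block (the exact format is set by the model's chat template, so treat this as an assumption to verify against your own outputs):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split generated text into (reasoning trace, final answer).

    Assumes a [THINK]...[/THINK] block as described above; if no block
    is found, the whole text is treated as the answer.
    """
    match = re.search(r"\[THINK\](.*?)\[/THINK\]", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(output.text)
print("Answer:", answer)
```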
## Performance

On Apple Silicon (M-series):
- Generation speed: ~45-50 tokens/sec
- Peak memory: ~8.5 GB
- Prompt processing: ~220 tokens/sec
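Throughput varies with chip generation, prompt length, and memory pressure. A hedged sketch for getting a rough wall-clock number on your own machine; the prompt text is arbitrary, and `verbose=True` is only an assumption about your mlx-vlm version printing generation statistics (drop it if the keyword is not accepted):

```python
import time
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template

model, processor = load("hunterbown/Ministral-3-14B-Reasoning-2512-MLX-4bit")
prompt = apply_chat_template(
    processor,
    config=model.config,
    prompt="Explain the difference between a stack and a queue."
)

start = time.perf_counter()
# verbose=True prints prompt/generation stats in recent mlx-vlm releases
output = generate(model, processor, prompt, max_tokens=200, verbose=True)
elapsed = time.perf_counter() - start
print(f"Generated up to 200 tokens in {elapsed:.1f}s")
```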
## Conversion Details

Converted using mlx-vlm:

```bash
python -m mlx_vlm.convert \
  --hf-path mistralai/Ministral-3-14B-Reasoning-2512 \
  --mlx-path ./ministral-3-14b-reasoning-4bit \
  -q --q-bits 4 --q-group-size 64
```
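After conversion, a quick local sanity check confirms the quantized weights load and generate text. This is a sketch only; the directory name matches the `--mlx-path` used above:

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template

# Load directly from the local conversion output directory
model, processor = load("./ministral-3-14b-reasoning-4bit")
prompt = apply_chat_template(processor, config=model.config, prompt="Say hello in French.")
print(generate(model, processor, prompt, max_tokens=32).text)
```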
## Alternatives for Vision

If you need vision capabilities, consider:

- GGUF versions with llama.cpp
- Waiting for mlx-vlm to fix Mistral3 vision support
## License

Apache 2.0 (same as original model)
## Credits

- Original model by Mistral AI
- MLX conversion using mlx-vlm
- Quantized by @hunterbown
## Model Tree

- Base model: mistralai/Ministral-3-14B-Base-2512
- Finetuned: mistralai/Ministral-3-14B-Reasoning-2512 (quantized in this repository)