Qwen2.5-1.5B-Instruct (MLX, 8-bit)
This repository contains an MLX-converted, 8-bit quantized version of Qwen/Qwen2.5-1.5B-Instruct.
- No fine-tuning or training was performed
- Format conversion + post-training quantization only
- Optimized for Apple Silicon and on-device usage
Usage
pip install -U mlx-lm
mlx_lm.generate \
--model Irfanuruchi/Qwen2.5-1.5B-Instruct-MLX-8bit \
--prompt "Write a helpful onboarding message for an iOS app in 3 bullet points."
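The model can also be called from Python via the mlx-lm API instead of the CLI. A minimal sketch, assuming `mlx-lm` is installed on an Apple Silicon machine (the prompt text is illustrative):

```python
# Sketch: generate with the 8-bit MLX model from Python using mlx-lm.
# Requires Apple Silicon and `pip install -U mlx-lm`; the first call
# downloads the weights from the Hugging Face Hub.
from mlx_lm import load, generate

model, tokenizer = load("Irfanuruchi/Qwen2.5-1.5B-Instruct-MLX-8bit")

# Qwen2.5-Instruct is a chat model, so wrap the prompt in its chat template.
messages = [
    {
        "role": "user",
        "content": "Write a helpful onboarding message for an iOS app in 3 bullet points.",
    }
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, max_tokens=100, verbose=True)
print(text)
```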
Bench notes (MacBook Pro M3 Pro)
- Prompt tokens: 45
- Generation tokens: 100
- Generation speed: ~74.7 tokens/sec
- Peak memory: ~1.717 GB
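As a back-of-envelope sanity check on these numbers (assuming the published ~1.54B total parameter count for Qwen2.5-1.5B; treat the figures as approximate):

```python
# Rough arithmetic behind the bench notes above (illustrative only).
gen_tokens = 100
tok_per_sec = 74.7
gen_time_s = gen_tokens / tok_per_sec  # ~1.34 s for the 100-token completion

# ~1.54B parameters at 8 bits (1 byte) per weight is ~1.54 GB of weights,
# consistent with the ~1.717 GB peak memory once the KV cache and
# activations are added on top.
params = 1.54e9
weight_gb = params * 1 / 1e9

print(f"generation time: {gen_time_s:.2f} s, weights: ~{weight_gb:.2f} GB")
```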
Related models
- 4-bit variant (recommended default for iPhone):
https://huggingface.co/Irfanuruchi/Qwen2.5-1.5B-Instruct-MLX-4bit