# CAT-Translate-0.8b MLX q8
This repository provides MLX-quantized weights (q8, 8-bit) converted from the original model, cyberagent/CAT-Translate-0.8b.
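A minimal usage sketch with the mlx-lm package (requires Apple Silicon and `pip install mlx-lm`). The prompt below is a hypothetical example for illustration, not a documented template for this model:

```python
# Sketch: load the q8 weights with mlx-lm and run generation.
# Assumes Apple Silicon with mlx-lm installed; the translation prompt
# below is an assumption, not this model's documented input format.
from mlx_lm import load, generate

model, tokenizer = load("hotchpotch/CAT-Translate-0.8b-mlx-q8")

prompt = "Translate the following Japanese sentence into English: こんにちは"
# If the tokenizer ships a chat template, apply it before generating.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```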
Model size: 0.2B params
Tensor types: BF16, U32
## Model tree for hotchpotch/CAT-Translate-0.8b-mlx-q8

- Base model: sbintuitions/sarashina2.2-0.5b
- Finetuned: cyberagent/CAT-Translate-0.8b