# Aya-Z GGUF Quantized Models

## Technical Details
- Quantization Tool: llama.cpp
- Version: 5287 (90703650)
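Files like these are produced with llama.cpp's own tooling. A minimal sketch of the two-step workflow, assuming a local llama.cpp checkout and the base model downloaded to `./Aya-Z` (both paths are hypothetical):

```shell
# Convert the Hugging Face checkpoint to a full-precision GGUF file
# (convert_hf_to_gguf.py ships with llama.cpp).
python convert_hf_to_gguf.py ./Aya-Z --outfile aya-z-f16.gguf --outtype f16

# Quantize to Q4_K_M, the 4-bit variant published in this repo.
llama-quantize aya-z-f16.gguf aya-z-q4_k_m.gguf Q4_K_M
```

Both commands require a built llama.cpp installation; they are shown here as a usage sketch, not as the exact commands used by the author.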
## Model Information
- Base Model: matrixportal/Aya-Z
- Quantized by: matrixportal
## Available Files
| Download | Type | Description |
|---|---|---|
| Download | Q4_K_M | 4-bit balanced (recommended default) |

Tip: Q4_K_M provides the best balance for most use cases.
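As a rough guide to what a Q4_K_M file costs on disk and in RAM: llama.cpp's quantize table lists Q4_K_M at about 4.85 bits per weight (the exact figure varies by model architecture, so treat it as an approximation). A minimal sketch of the arithmetic, using a hypothetical 8B-parameter model:

```python
def gguf_size_gb(n_params_billions: float, bits_per_weight: float = 4.85) -> float:
    """Approximate GGUF file size in GB for a given parameter count.

    bits_per_weight defaults to 4.85, the figure llama.cpp cites for
    Q4_K_M; real files also carry metadata and a few higher-precision
    tensors, so treat the result as a ballpark estimate.
    """
    total_bytes = n_params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# A hypothetical 8B-parameter model quantized to Q4_K_M:
print(f"{gguf_size_gb(8):.2f} GB")  # → 4.85 GB
```

The same arithmetic works in reverse: dividing a downloaded file's size by the parameter count gives the effective bits per weight of the quantization.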
## Model Tree
Model tree for matrixportalx/Aya-Z-GGUF
- Base model: matrixportalx/Aya-X-Mod