This is a custom quant of DeepSeek's R1 0528 model with the following quantization mix:
- Q8_0 for the default quantization type (attention, shared experts, etc.)
- Q4_K for the FFN_UP and FFN_GATE tensors
- Q5_K for the FFN_DOWN tensors
The idea is that, because the FFN tensors are huge compared to the rest of the tensors in the model, quantizing them more aggressively should achieve better quality while keeping the overall model size smaller than a comparable naive (uniform) quantization.
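To make the size trade-off concrete, here is a rough back-of-envelope sketch in Python. The bits-per-weight figures are llama.cpp's nominal values for these quant types; the parameter split across tensor groups is an illustrative assumption, not measured from the actual GGUF.

```python
# Back-of-envelope size estimate for the mixed quant vs. uniform Q8_0.
# Bits-per-weight values are llama.cpp's nominal figures; the parameter
# split below is a hypothetical illustration, not exact tensor counts.

BPW = {"Q8_0": 8.5, "Q5_K": 5.5, "Q4_K": 4.5}

# Assumed split of the ~671B total parameters (illustrative numbers):
# the expert FFN tensors dominate the parameter count.
params = {
    "ffn_up_gate": 400e9,      # FFN_UP + FFN_GATE -> Q4_K
    "ffn_down": 200e9,         # FFN_DOWN          -> Q5_K
    "everything_else": 71e9,   # attention, shared experts, etc. -> Q8_0
}

mixed_bits = (params["ffn_up_gate"] * BPW["Q4_K"]
              + params["ffn_down"] * BPW["Q5_K"]
              + params["everything_else"] * BPW["Q8_0"])
uniform_bits = sum(params.values()) * BPW["Q8_0"]

to_gib = lambda bits: bits / 8 / 2**30
print(f"mixed quant : ~{to_gib(mixed_bits):.0f} GiB")   # ~408 GiB
print(f"uniform Q8_0: ~{to_gib(uniform_bits):.0f} GiB") # ~664 GiB
```

Under these assumed proportions, the mix saves a few hundred GiB relative to uniform Q8_0 while the quality-sensitive attention and shared-expert weights stay at 8 bits.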
This model was produced using Unsloth's imatrix (importance matrix).
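To check which quant type each tensor actually ended up with, a sketch like the following should work, assuming the `gguf` Python package that ships with llama.cpp is installed (`pip install gguf`); the file name below is a hypothetical placeholder.

```python
# Sketch: list each tensor's quantization type in a GGUF file.
# Assumes the gguf-py package from llama.cpp; the path is hypothetical.
from gguf import GGUFReader

reader = GGUFReader("DeepSeek-R1-0528-custom.gguf")
for tensor in reader.tensors:
    # tensor_type is a GGMLQuantizationType enum (e.g. Q4_K, Q5_K, Q8_0)
    print(f"{tensor.name}: {tensor.tensor_type.name}")
```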
Base model: deepseek-ai/DeepSeek-R1-0528