---
inference: false
base_model: Qwen/Qwen3-Next-80B-A3B-Instruct
base_model_relation: quantized
tags:
- exl3
library_name: exllamav3
pipeline_tag: text-generation
---

Some pointlessly bigger exllamav3 quants of [Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct) to complement [Turboderp's optimized quants](https://huggingface.co/turboderp/Qwen3-Next-80B-A3B-Instruct-exl3).

* [6.00bpw_H6](https://huggingface.co/MikeRoz/Qwen3-Next-80B-A3B-Instruct-exl3/tree/6.00bpw_H6) (56.561 GiB)
* [8.00bpw_H8](https://huggingface.co/MikeRoz/Qwen3-Next-80B-A3B-Instruct-exl3/tree/8.00bpw_H8) (75.026 GiB)
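Since each quant lives on its own branch, a plain clone or download of the default branch won't fetch the weights you want. A minimal sketch of pulling one specific quant with the `huggingface_hub` CLI (the `--local-dir` path is just an example):

```shell
# Download only the 6.00bpw_H6 branch of this repo
# (requires: pip install -U "huggingface_hub[cli]")
huggingface-cli download MikeRoz/Qwen3-Next-80B-A3B-Instruct-exl3 \
  --revision 6.00bpw_H6 \
  --local-dir Qwen3-Next-80B-A3B-Instruct-exl3-6.00bpw_H6
```

Swap `6.00bpw_H6` for `8.00bpw_H8` to grab the larger quant instead.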