Active filters: 1bit
legraphista/Qwen2.5-3B-Instruct-IMat-GGUF • Text Generation • 3B • Updated • 818
legraphista/Qwen2.5-7B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 2.02k
legraphista/Qwen2.5-14B-Instruct-IMat-GGUF • Text Generation • 15B • Updated • 907
legraphista/Qwen2.5-32B-Instruct-IMat-GGUF • Text Generation • 33B • Updated • 1.6k
legraphista/Qwen2.5-Coder-1.5B-Instruct-IMat-GGUF • Text Generation • 2B • Updated • 679
legraphista/Qwen2.5-Math-1.5B-Instruct-IMat-GGUF • Text Generation • 2B • Updated • 569
legraphista/Qwen2.5-Coder-7B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 425
legraphista/Qwen2.5-Math-7B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 1.78k
legraphista/Qwen2.5-72B-Instruct-IMat-GGUF • Text Generation • 73B • Updated • 1.74k
legraphista/Llama-3.2-1B-Instruct-IMat-GGUF • Text Generation • 1B • Updated • 986
legraphista/Llama-3.2-3B-Instruct-IMat-GGUF • Text Generation • 3B • Updated • 449 • 1
mradermacher/Bitnet-M7-70m-GGUF • 77.5M • Updated • 178
mradermacher/Bitnet-M7-70m-i1-GGUF • 77.5M • Updated • 400
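Any of the GGUF repositories listed above can be fetched locally with the huggingface_hub client before loading them in a GGUF runtime such as llama.cpp. The sketch below is a minimal example under assumptions: it uses the real hf_hub_download function, but the .gguf filename shown is hypothetical, since each repo names its quantized variants differently; check the repo's file listing for the exact 1-bit (IQ1) file name.

```python
# Minimal sketch: download one GGUF file from a repo in the list above.
# Assumes `pip install huggingface_hub`; the filename below is a guess,
# not the verified name of the file inside the repository.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="legraphista/Qwen2.5-3B-Instruct-IMat-GGUF",
    filename="Qwen2.5-3B-Instruct.IQ1_M.gguf",  # hypothetical 1-bit variant name
)
print(local_path)  # path to the cached GGUF file on disk
```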