Update README.md

Thanks for the GGUF quants, [Michael Radermacher](https://huggingface.co/mradermacher)!
README.md CHANGED

@@ -17,7 +17,7 @@ tags:
 
 
 - HF: wolfram/miqu-1-103b
-- GGUF:
+- GGUF: [mradermacher's Q2_K | Q3_K_S | Q3_K_M | Q4_K_M | Q6_K | Q8_0](https://huggingface.co/mradermacher/miqu-1-103b-GGUF)
 - EXL2: π
 
 This is a 103b frankenmerge of [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with itself using [mergekit](https://github.com/cg123/mergekit).
@@ -26,6 +26,8 @@ Inspired by [Midnight-Rose-103B-v2.0.3](https://huggingface.co/sophosympatheia/M
 
 Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit) - the open-source platform for building in-app AI Copilots into any product, with any LLM model. Check out their GitHub.
 
+Thanks for the GGUF quants, [Michael Radermacher](https://huggingface.co/mradermacher)!
+
 Also available:
 
 - [miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b) - Miqu's older, bigger twin sister; same Miqu, inflated to 120B.
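The interleaved-layer self-merge described in the card is the kind of recipe mergekit expresses as a `passthrough` merge over overlapping layer slices. A minimal sketch of such a config follows; the layer ranges here are illustrative assumptions, not the actual miqu-1-103b recipe:

```yaml
# Hypothetical mergekit passthrough config sketching a self-interleave
# frankenmerge. The slice boundaries below are made up for illustration;
# the real miqu-1-103b layer ranges are not stated in this card.
slices:
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [0, 40]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [20, 60]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [40, 80]
merge_method: passthrough
dtype: float16
```

Because `passthrough` simply concatenates the selected layers rather than averaging weights, overlapping slices of a single 70B model yield a deeper (here 103B-class) model from the same parameters.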