Update README.md
README.md CHANGED

@@ -113,9 +113,10 @@ MixTAO-7Bx2-MoE is a Mixture of Experts (MoE).
 This model is mainly used for experiments with large-model techniques; increasingly refined iterations should eventually produce a high-quality large language model.
 
 ### 🦒 Colab
-
+| Link | Info - Model Name |
 | --- | --- |
-|[Open in Colab](https://colab.research.google.com/drive/1y2XmAGrQvVfbgtimTsCBO3tem735q7HZ?usp=sharing) | MixTAO-7Bx2-MoE-v8.1 |
+|[Open in Colab](https://colab.research.google.com/drive/1y2XmAGrQvVfbgtimTsCBO3tem735q7HZ?usp=sharing) | MixTAO-7Bx2-MoE-v8.1 |
+|[mixtao-7bx2-moe-v8.1.Q4_K_M.gguf](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF/resolve/main/mixtao-7bx2-moe-v8.1.Q4_K_M.gguf) | GGUF of MixTAO-7Bx2-MoE-v8.1 <br> Only Q4_K_M in https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF |
 
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_zhengr__MixTAO-7Bx2-MoE-v8.1)
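Since the added table row points at a single Q4_K_M GGUF file, here is a minimal sketch of running it locally. The repo id and filename come from the links above; the use of `llama-cpp-python` and `huggingface_hub`, the prompt, and the context size are my assumptions, not taken from the model card:

```python
# Minimal sketch (assumed setup): download and run the Q4_K_M GGUF linked above.
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the single quantized file from the GGUF repo referenced in the table.
model_path = hf_hub_download(
    repo_id="zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF",
    filename="mixtao-7bx2-moe-v8.1.Q4_K_M.gguf",
)

# n_ctx is an illustrative choice, not a value from the model card.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Q: What is a Mixture of Experts model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```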