Upload README.md with huggingface_hub
README.md (CHANGED)
@@ -21,6 +21,8 @@ model-index:
 pipeline_tag: text-generation
 quantized_by: legraphista
 tags:
+- generated_from_trainer
+- axolotl
 - quantized
 - GGUF
 - imatrix
@@ -35,7 +37,7 @@ _Llama.cpp imatrix quantization of cognitivecomputations/dolphin-2.9.1-mixtral-1
 
 Original Model: [cognitivecomputations/dolphin-2.9.1-mixtral-1x22b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-mixtral-1x22b)
 Original dtype: `BF16` (`bfloat16`)
-Quantized by: llama.cpp [
+Quantized by: llama.cpp [b3024](https://github.com/ggerganov/llama.cpp/releases/tag/b3024)
 IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
 
 - [dolphin-2.9.1-mixtral-1x22b-IMat-GGUF](#dolphin-2-9-1-mixtral-1x22b-imat-gguf)
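For context on the "IMatrix" labels in the quant table below: an importance matrix is computed with llama.cpp from the calibration file linked above, then fed to the quantizer. A minimal sketch of that pipeline, assuming the `imatrix` and `quantize` binaries from a llama.cpp build around b3024; the output path `imatrix.dat` and the choice of target quant are illustrative, not taken from this repo:

```
# compute an importance matrix from the calibration data (assumed workflow)
./imatrix -m dolphin-2.9.1-mixtral-1x22b.BF16.gguf \
  -f imatrix.calibration.medium.raw -o imatrix.dat

# quantize using that matrix (target type per the quant table)
./quantize --imatrix imatrix.dat \
  dolphin-2.9.1-mixtral-1x22b.BF16.gguf \
  dolphin-2.9.1-mixtral-1x22b.Q4_K_S.gguf Q4_K_S
```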
@@ -73,20 +75,25 @@ Link: [here](https://huggingface.co/legraphista/dolphin-2.9.1-mixtral-1x22b-IMat
 ### All Quants
 | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
 | -------- | ---------- | --------- | ------ | ------------ | -------- |
-| dolphin-2.9.1-mixtral-1x22b.FP16 | F16 | - | ⏳ Processing | ⚪ Static | -
 | dolphin-2.9.1-mixtral-1x22b.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | -
+| dolphin-2.9.1-mixtral-1x22b.FP16 | F16 | - | ⏳ Processing | ⚪ Static | -
+| dolphin-2.9.1-mixtral-1x22b.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | -
+| dolphin-2.9.1-mixtral-1x22b.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | -
 | dolphin-2.9.1-mixtral-1x22b.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | -
 | dolphin-2.9.1-mixtral-1x22b.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | -
+| dolphin-2.9.1-mixtral-1x22b.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | -
-| dolphin-2.9.1-mixtral-1x22b.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | -
-| dolphin-2.9.1-mixtral-1x22b.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | -
-| dolphin-2.9.1-mixtral-1x22b.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | -
+| dolphin-2.9.1-mixtral-1x22b.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | -
+| dolphin-2.9.1-mixtral-1x22b.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | -
+| dolphin-2.9.1-mixtral-1x22b.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | -
+| dolphin-2.9.1-mixtral-1x22b.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | -
+| dolphin-2.9.1-mixtral-1x22b.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | -
 | dolphin-2.9.1-mixtral-1x22b.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | -
@@ -102,11 +109,11 @@ pip install -U "huggingface_hub[cli]"
 ```
 Download the specific file you want:
 ```
-huggingface-cli download legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF --include "dolphin-2.9.1-mixtral-1x22b.
+huggingface-cli download legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF --include "dolphin-2.9.1-mixtral-1x22b.BF16.gguf" --local-dir ./
 ```
 If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
 ```
-huggingface-cli download legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF --include "dolphin-2.9.1-mixtral-1x22b.
+huggingface-cli download legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF --include "dolphin-2.9.1-mixtral-1x22b.BF16/*" --local-dir ./
 # see FAQ for merging GGUF's
 ```
 
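Note that `--include` takes glob-style patterns, so several quants can be fetched in one call; a sketch, assuming your `huggingface_hub` version accepts multiple patterns (quant file names taken from the table above):

```
# download two quants in one call; patterns match paths inside the repo
huggingface-cli download legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF \
  --include "dolphin-2.9.1-mixtral-1x22b.Q5_K.gguf" "dolphin-2.9.1-mixtral-1x22b.IQ4_XS.gguf" \
  --local-dir ./
```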
@@ -144,7 +151,7 @@ What about solving an 2x + 3 = 7 equation?<|im_end|>
 
 ### Llama.cpp
 ```
-llama.cpp/main -m dolphin-2.9.1-mixtral-1x22b.
+llama.cpp/main -m dolphin-2.9.1-mixtral-1x22b.BF16.gguf --color -i -p "prompt here (according to the chat template)"
 ```
 
 ---
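Since the model uses the ChatML template shown earlier in the README, a fully spelled-out invocation might look like the sketch below; the quant file name and the system/user text are illustrative, and `-e` is assumed to make `main` process the `\n` escapes in the prompt:

```
# one-shot ChatML prompt (sketch; adjust the model file to the quant you downloaded)
llama.cpp/main -m dolphin-2.9.1-mixtral-1x22b.Q5_K.gguf --color -e \
  -p "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n<|im_start|>user\nSolve 2x + 3 = 7.<|im_end|>\n<|im_start|>assistant\n"
```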
@@ -159,8 +166,8 @@ According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
-2. Locate your GGUF chunks folder (ex: `dolphin-2.9.1-mixtral-1x22b.
-3. Run `gguf-split --merge dolphin-2.9.1-mixtral-1x22b.
+2. Locate your GGUF chunks folder (ex: `dolphin-2.9.1-mixtral-1x22b.BF16`)
+3. Run `gguf-split --merge dolphin-2.9.1-mixtral-1x22b.BF16/dolphin-2.9.1-mixtral-1x22b.BF16-00001-of-XXXXX.gguf dolphin-2.9.1-mixtral-1x22b.BF16.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.
 
 ---
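Putting the FAQ steps together, an end-to-end download-and-merge session looks like this sketch; the chunk count `XXXXX` depends on the upload and is kept as in the README:

```
# 1) download every chunk of the split BF16 model
huggingface-cli download legraphista/dolphin-2.9.1-mixtral-1x22b-IMat-GGUF \
  --include "dolphin-2.9.1-mixtral-1x22b.BF16/*" --local-dir ./

# 2) merge, pointing gguf-split at the FIRST chunk
gguf-split --merge \
  dolphin-2.9.1-mixtral-1x22b.BF16/dolphin-2.9.1-mixtral-1x22b.BF16-00001-of-XXXXX.gguf \
  dolphin-2.9.1-mixtral-1x22b.BF16.gguf
```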