Commit 0d1c20f
1 Parent(s): 78fbf1f
"Update README.md"
README.md CHANGED
@@ -50,7 +50,7 @@ The core project making use of the ggml library is the [llama.cpp](https://githu
 
 There is a bunch of quantized files available. How to choose the best for you:
 
-#
+# Legacy quants
 
 Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
 Nevertheless, they are fully supported, as there are several circumstances that cause certain model not to be compatible with the modern K-quants.
@@ -64,6 +64,7 @@ With a Q6_K you should find it really hard to find a quality difference to the o
 
 
 
+---
 # Original Model Card:
 # MPT-7B-StoryWriter-65k+
 
@@ -270,6 +271,7 @@ Please cite this model using the following format:
 ```
 
 ***End of original Model File***
+---
 
 
 ## Please consider to support my work
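Beyond the diff itself, a minimal usage sketch may help readers choose between the quant types discussed in the changed section. The snippet below is not part of this commit: it assumes the `ctransformers` Python package (one common way to run MPT GGML files) and a hypothetical local file name.

```python
# Sketch only -- not from this commit. Loads one of the quantized GGML
# files described in the README with the ctransformers library.
# The file name is hypothetical; substitute whichever quant you chose
# (a legacy type such as q4_0, or a modern K-quant such as q6_K).
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "mpt-7b-storywriter.ggmlv3.q4_0.bin",  # hypothetical local file name
    model_type="mpt",        # tells ctransformers which GGML architecture to use
    context_length=2048,     # raise this to exploit the long-context variant
)

print(llm("Once upon a time", max_new_tokens=64))
```

Swapping between a legacy quant and a K-quant should only require changing the file name; the loading code stays the same.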