Upload complete model
README.md CHANGED

@@ -5,7 +5,7 @@ tags:
 library_name: mlx
 pipeline_tag: text-generation
 ---
-**See GLM-4.7 MLX in action - [demonstration video
+**See GLM-4.7 MLX in action - [demonstration video](https://youtu.be/E-8KJpUFalM)**
 
 *q6.5bit quant typically achieves 1.128 perplexity in our testing*
 | Quantization | Perplexity |
@@ -25,7 +25,7 @@ pipeline_tag: text-generation
 - Memory usage: ~265 GB
 
 ##### Quantized with a modified version of [MLX](https://github.com/ml-explore/mlx) 0.30
-##### For more details see [demonstration video
+##### For more details see [demonstration video](https://youtu.be/E-8KJpUFalM) or visit [GLM-4.7](https://huggingface.co/zai-org/GLM-4.7).
 
 ## Disclaimer
 
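The README touched by this diff describes an MLX quant intended for the text-generation pipeline. As a minimal sketch of how such a quant is typically loaded with the mlx-lm package (the repo id below is a placeholder, not taken from this diff):

```python
# Minimal sketch, assuming the mlx-lm package; the repo id is a placeholder,
# substitute the actual GLM-4.7 MLX q6.5bit repository.
from mlx_lm import load, generate

model, tokenizer = load("your-username/GLM-4.7-mlx-q6.5bit")

prompt = "Explain what perplexity measures in one sentence."
if tokenizer.chat_template is not None:
    # Wrap the prompt with the model's chat template before generating.
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate up to 128 tokens from the quantized model.
text = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
```

Per the README shown above, loading this quant needs roughly 265 GB of memory.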