inferencerlabs committed
Commit 1af9a9f · verified · Parent(s): 9e7ec4f

Upload complete model

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -5,7 +5,7 @@ tags:
 library_name: mlx
 pipeline_tag: text-generation
 ---
-**See GLM-4.7 MLX in action - [demonstration video - coming soon](https://youtube.com/xcreate)**
+**See GLM-4.7 MLX in action - [demonstration video](https://youtu.be/E-8KJpUFalM)**
 
 *q6.5bit quant typically achieves 1.128 perplexity in our testing*
 | Quantization | Perplexity |
@@ -25,7 +25,7 @@ pipeline_tag: text-generation
 - Memory usage: ~265 GB
 
 ##### Quantized with a modified version of [MLX](https://github.com/ml-explore/mlx) 0.30
-##### For more details see [demonstration video - coming soon](https://youtube.com/xcreate) or visit [GLM-4.7](https://huggingface.co/zai-org/GLM-4.7).
+##### For more details see [demonstration video](https://youtu.be/E-8KJpUFalM) or visit [GLM-4.7](https://huggingface.co/zai-org/GLM-4.7).
 
 ## Disclaimer