RedHatAI/granite-3.1-8b-base-quantized.w4a16
Tags: Text Generation · Safetensors · English · granite · w4a16 · int4 · vllm · compressed-tensors
License: apache-2.0
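The `w4a16` / `int4` tags indicate 4-bit integer weights with 16-bit activations. As an illustration only (the checkpoint's actual quantization recipe is the repository's `recipe.yaml`, and the real scheme is applied per weight group, not per tensor), here is a minimal NumPy sketch of symmetric int4 weight quantization; `quantize_w4` and `dequantize_w4` are hypothetical helper names:

```python
import numpy as np

def quantize_w4(w: np.ndarray):
    """Symmetric int4 sketch: integer range [-8, 7], one fp scale per tensor."""
    scale = float(np.abs(w).max()) / 7.0          # map the largest magnitude to +/-7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_w4(q: np.ndarray, scale: float) -> np.ndarray:
    # Dequantized weights (and the activations they multiply) stay 16-bit: the "a16" half.
    return q.astype(np.float16) * np.float16(scale)

w = np.array([0.12, -0.5, 0.33, -0.07], dtype=np.float16)
q, s = quantize_w4(w)
w_hat = dequantize_w4(q, s)   # reconstruction error is bounded by about scale / 2
```

Storing `q` plus one scale in place of 16-bit weights is what shrinks an 8B-parameter model to the 4.92 GB `model.safetensors` seen below.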
Files and versions: 4.92 GB · 5 contributors · History: 14 commits
Latest commit: ekurtic, "Update README.md" (8354230, verified, 11 months ago)
File                       Size       Last commit         Age
.gitattributes             1.52 kB    initial commit      about 1 year ago
README.md                  15.7 kB    Update README.md    11 months ago
config.json                13.7 kB    Upload model files  about 1 year ago
generation_config.json     132 Bytes  Upload model files  about 1 year ago
merges.txt                 442 kB     Upload model files  about 1 year ago
model.safetensors          4.92 GB    Upload model files  about 1 year ago
recipe.yaml                336 Bytes  Upload model files  about 1 year ago
special_tokens_map.json    1.02 kB    Upload model files  about 1 year ago
tokenizer.json             3.48 MB    Upload model files  about 1 year ago
tokenizer_config.json      4.16 kB    Upload model files  about 1 year ago
vocab.json                 777 kB     Upload model files  about 1 year ago