TheBloke/Yarn-Mistral-7B-64k-GPTQ
Text Generation · Transformers · Safetensors · English · mistral · custom_code · text-generation-inference · 4-bit precision · gptq
Dataset: emozilla/yarn-train-tokenized-16k-mistral
arXiv: 2309.00071
License: apache-2.0
Files and versions
Yarn-Mistral-7B-64k-GPTQ (branch: main) · 4.16 GB · 1 contributor
History: 4 commits · latest commit 606db5a ("Upload README.md" by TheBloke, about 2 years ago)
File                        Size        Last commit         When
.gitattributes              1.52 kB     initial commit      about 2 years ago
README.md                   21.3 kB     Upload README.md    about 2 years ago
config.json                 1.73 kB     GPTQ model commit   about 2 years ago
configuration_mistral.py    8.9 kB      GPTQ model commit   about 2 years ago
generation_config.json      116 Bytes   GPTQ model commit   about 2 years ago
model.safetensors           4.16 GB     GPTQ model commit   about 2 years ago
modeling_mistral_yarn.py    67.3 kB     GPTQ model commit   about 2 years ago
quantize_config.json        134 Bytes   GPTQ model commit   about 2 years ago
special_tokens_map.json     145 Bytes   GPTQ model commit   about 2 years ago
tokenizer.json              1.8 MB      GPTQ model commit   about 2 years ago
tokenizer.model             493 kB      GPTQ model commit   about 2 years ago
tokenizer_config.json       953 Bytes   GPTQ model commit   about 2 years ago
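Given the gptq / 4-bit precision tags and the custom modeling files above (configuration_mistral.py, modeling_mistral_yarn.py), loading this repository presumably follows the usual Transformers route for GPTQ checkpoints with remote code enabled. The snippet below is a minimal sketch under that assumption; only the repo ID, branch name, and file names come from this page, while the package requirements (transformers, optimum, auto-gptq) and generation settings are illustrative.

```python
# Minimal sketch: load the GPTQ checkpoint listed above with Transformers.
# Assumes transformers, optimum, and auto-gptq are installed; the custom
# YaRN modeling code shipped in this repo (modeling_mistral_yarn.py)
# requires trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Yarn-Mistral-7B-64k-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place the 4-bit weights on available GPU(s)
    trust_remote_code=True,  # needed for the repo's custom Mistral/YaRN classes
    revision="main",         # the only branch shown in the listing above
)

prompt = "Tell me about AI"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```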