MaziyarPanahi/Starling-LM-7B-beta-GPTQ
Tags: Text Generation, Transformers, Safetensors, mistral, finetuned, quantized, 4-bit precision, gptq, reward model, RLHF, RLAIF, conversational, has_space, text-generation-inference
Language: en
Dataset: berkeley-nest/Nectar
Paper: arxiv:1909.08593
License: apache-2.0
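The gptq, 4-bit precision, and Transformers tags indicate a GPTQ-quantized checkpoint that the transformers library can usually load directly when a GPTQ backend is installed (for example optimum plus auto-gptq). The snippet below is a minimal sketch, not taken from this repository's model card: the prompt and generation settings are illustrative, and it assumes a CUDA GPU and that the tokenizer ships a chat template (see tokenizer_config.json in the file list further down).

```python
# Minimal sketch: loading a 4-bit GPTQ checkpoint with transformers.
# Assumes `pip install transformers optimum auto-gptq` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/Starling-LM-7B-beta-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)

# Illustrative prompt; assumes the repository's tokenizer_config.json
# provides a chat template.
messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```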
Files and versions (branch: main)

Starling-LM-7B-beta-GPTQ · 4.16 GB · 1 contributor · History: 2 commits
Latest commit: a4e5694 (verified) by MaziyarPanahi, "Upload folder using huggingface_hub", almost 2 years ago
File                      Size             Last commit message                    Last modified
.gitattributes            1.52 kB          initial commit                         almost 2 years ago
README.md                 1.68 kB          Upload folder using huggingface_hub    almost 2 years ago
added_tokens.json         53 Bytes         Upload folder using huggingface_hub    almost 2 years ago
config.json               991 Bytes        Upload folder using huggingface_hub    almost 2 years ago
model.safetensors         4.16 GB (xet)    Upload folder using huggingface_hub    almost 2 years ago
quantize_config.json      267 Bytes        Upload folder using huggingface_hub    almost 2 years ago
special_tokens_map.json   650 Bytes        Upload folder using huggingface_hub    almost 2 years ago
tokenizer.json            1.8 MB           Upload folder using huggingface_hub    almost 2 years ago
tokenizer.model           493 kB (xet)     Upload folder using huggingface_hub    almost 2 years ago
tokenizer_config.json     1.82 kB          Upload folder using huggingface_hub    almost 2 years ago