Nhatminh1234/ReframeBot-DPO-Llama3.1-8B
Tags: Text Generation · PEFT · Safetensors · Transformers · dpo · lora · trl · conversational · arxiv:1910.09700
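This repository ships a PEFT LoRA adapter (adapter_config.json, adapter_model.safetensors) rather than full model weights, so it is loaded on top of a base Llama 3.1 8B checkpoint. The following is a minimal loading sketch with transformers and peft; the base model id meta-llama/Llama-3.1-8B-Instruct and the example prompt are assumptions for illustration (the exact base checkpoint is recorded in adapter_config.json), not details stated on this page.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed base checkpoint; adapter_config.json in this repo records the real one.
base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "Nhatminh1234/ReframeBot-DPO-Llama3.1-8B"

# The repo includes its own tokenizer files and chat_template.jinja,
# so the tokenizer is loaded from the adapter repo.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical prompt; "ReframeBot" suggests a reframing assistant, but that is an inference.
messages = [{"role": "user", "content": "I failed my exam and feel like giving up."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If adapter-free inference is preferred, `model.merge_and_unload()` can be called after loading to fold the LoRA weights into the base model.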
Files and versions (branch: main)
ReframeBot-DPO-Llama3.1-8B · 272 MB · 1 contributor · History: 2 commits
Latest commit c6bc8ca (verified) by Nhatminh1234: "Upload DPO Adapter", 3 months ago
File                        Size       Commit message        Last updated
.gitattributes              1.57 kB    Upload DPO Adapter    3 months ago
README.md                   5.24 kB    Upload DPO Adapter    3 months ago
adapter_config.json         990 Bytes  Upload DPO Adapter    3 months ago
adapter_model.safetensors   168 MB     Upload DPO Adapter    3 months ago
chat_template.jinja         4.72 kB    Upload DPO Adapter    3 months ago
optimizer.pt                86.9 MB    Upload DPO Adapter    3 months ago
rng_state.pth               14.6 kB    Upload DPO Adapter    3 months ago
scheduler.pt                1.47 kB    Upload DPO Adapter    3 months ago
special_tokens_map.json     342 Bytes  Upload DPO Adapter    3 months ago
tokenizer.json              17.2 MB    Upload DPO Adapter    3 months ago
tokenizer_config.json       52.6 kB    Upload DPO Adapter    3 months ago
trainer_state.json          25.1 kB    Upload DPO Adapter    3 months ago
training_args.bin           6.8 kB     Upload DPO Adapter    3 months ago
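Besides the adapter and tokenizer files, the listing includes full training-state artifacts (optimizer.pt, scheduler.pt, rng_state.pth, trainer_state.json, training_args.bin), the checkpoint layout produced by a Hugging Face Trainer run; together with the dpo, lora, and trl tags this points to TRL's DPOTrainer with a LoRA peft_config. Below is a minimal sketch of such a setup, not the author's actual training script: the preference dataset path, LoRA hyperparameters, beta, and base checkpoint are all assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Assumed base checkpoint; the real one is recorded in adapter_config.json.
base_id = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Hypothetical preference data with "prompt", "chosen", "rejected" columns,
# the format DPOTrainer expects.
train_dataset = load_dataset("json", data_files="reframe_preferences.jsonl", split="train")

# Illustrative LoRA hyperparameters; adapter_config.json holds the real ones.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

args = DPOConfig(
    output_dir="ReframeBot-DPO-Llama3.1-8B",
    beta=0.1,                        # assumed DPO temperature
    per_device_train_batch_size=2,   # assumed batch size
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL releases
    peft_config=peft_config,     # trains only the LoRA adapter, matching the uploaded files
)
trainer.train()
trainer.save_model()  # writes adapter weights plus tokenizer and trainer state
```

Because a peft_config is passed, only the adapter weights end up in adapter_model.safetensors (168 MB here), while the 8B base weights are not stored in the repository.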