minsik-oh/dpo-model-sample
Tags: PEFT · Safetensors · trl · dpo · Generated from Trainer · License: llama3.1
Branch: main · dpo-model-sample · 103 MB · 1 contributor · History: 2 commits
Latest commit: 2f4c3be (verified) by minsik-oh, "minsik-oh/dpo-model-sample", over 1 year ago.
Every entry below was last touched by that commit, over 1 year ago.

File                        Size
.config                     -
minsik-oh                   -
sample_data                 -
.gitattributes              1.65 kB
README.md                   1.24 kB
adapter_config.json         656 Bytes
adapter_model.safetensors   13.6 MB
annotations.csv             63.6 kB
special_tokens_map.json     325 Bytes
tokenizer.json              9.09 MB
tokenizer_config.json       55.4 kB
training_args.bin           5.18 kB
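The adapter_config.json plus adapter_model.safetensors pair above is the standard layout of a PEFT adapter, so the repo should be loadable with the PEFT library. A minimal sketch, assuming `peft` and `transformers` are installed and that the base checkpoint recorded in adapter_config.json's base_model_name_or_path field (presumably a Llama 3.1 model, per the license tag, which may be gated on the Hub) is accessible; the function name here is illustrative, not from the repo:

```python
repo_id = "minsik-oh/dpo-model-sample"

def load_dpo_adapter(repo: str = repo_id):
    """Load tokenizer and LoRA-adapted model from the Hub.

    AutoPeftModelForCausalLM reads the repo's adapter_config.json,
    downloads the base model named in its base_model_name_or_path
    field, then attaches the adapter weights stored in
    adapter_model.safetensors.
    """
    # Imported lazily: requires `pip install peft transformers`
    # and network access (plus Hub credentials if the base is gated).
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoPeftModelForCausalLM.from_pretrained(repo)
    return tokenizer, model
```

Calling `load_dpo_adapter()` returns a `(tokenizer, model)` pair ready for generation; the adapter can also be merged into the base weights afterwards with PEFT's `merge_and_unload()` if a standalone model is wanted.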