humans.txt

A creative writing and roleplay model which places emphasis on maintaining output diversity and creativity without compromising on quality throughout the alignment process.

This model is post-trained from Mistral Small 3 Base using all human (no synthetic) data, as well as Diverse ORPO from "Modifying Large Language Model Post-Training for Diverse Creative Writing".

Usage

I recommend the following sampler settings, as humans.txt is somewhat sensitive to samplers, like most Mistral Small 3 based models:

  • Chat Template: Alpaca

  • Prefills: Untested; the model seems a bit touchy with prefills, so use them with caution.

  • Temperature: 0.6

  • Top P: 0.95

  • Repetition Penalty: 1.08

  • Repetition Range: 4096
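The settings above can be sketched as a reusable sampler config plus an Alpaca-style prompt builder. This is a minimal illustration, not an official snippet: the parameter names follow Hugging Face `transformers` conventions, the prompt template is the standard single-turn Alpaca format, and note that "Repetition Range" has no direct `transformers` equivalent (it is a frontend setting, e.g. in SillyTavern or KoboldCpp, that limits how far back the repetition penalty looks).

```python
# Standard single-turn Alpaca prompt template (the recommended chat template).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

# Recommended sampler settings, using transformers-style parameter names.
# "Repetition Range: 4096" is frontend-specific and omitted here.
SAMPLER_SETTINGS = {
    "do_sample": True,
    "temperature": 0.6,
    "top_p": 0.95,
    "repetition_penalty": 1.08,
}

def build_prompt(instruction: str) -> str:
    """Format a single-turn Alpaca prompt for the model."""
    return ALPACA_TEMPLATE.format(instruction=instruction)
```

With a loaded model and tokenizer, these would be passed through as `model.generate(**inputs, **SAMPLER_SETTINGS, max_new_tokens=...)`, or entered directly into your frontend's sampler panel.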

Model size: 24B params (BF16, Safetensors)