Configurable Preference Tuning ⚙️📝
CPT uses rubric-guided synthetic data and DPO to enable LLMs to dynamically adjust their behavior (e.g., writing style) at inference time via system prompts.
This repository contains the CPT-tuned model described in *Configurable Preference Tuning with Rubric-Guided Synthetic Data*.
The training code is available at https://github.com/vicgalle/configurable-preference-tuning.
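Since CPT steers the model through a rubric placed in the system prompt, usage reduces to packing the desired style rubric into the system turn of a chat. The sketch below is illustrative only: the helper name and rubric wording are assumptions, not taken from the repository, and the resulting messages can be fed to any chat-templated model (e.g. via `tokenizer.apply_chat_template`).

```python
# Hypothetical sketch of how a style rubric is supplied to a CPT-tuned
# model at inference time. The helper and rubric text are illustrative
# assumptions, not part of the official repository.

def build_cpt_messages(rubric: str, user_prompt: str) -> list[dict]:
    """Pack a style rubric into the system turn of a chat, the channel
    CPT uses to configure model behavior at inference time."""
    return [
        {"role": "system", "content": f"Follow this writing rubric:\n{rubric}"},
        {"role": "user", "content": user_prompt},
    ]

rubric = "- Terse, formal tone\n- No rhetorical questions"
messages = build_cpt_messages(rubric, "Summarize the attention mechanism.")
print(messages[0]["role"])  # -> system
```

Swapping the rubric string changes the target style without retraining, which is the configurability the collection description refers to.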