---
base_model:
- Novaciano/Dirty_RP-3.2-1B
- Novaciano/ChatML_RP-3.2-1B
- D1rtyB1rd/Dirty-Alice-RP-NSFW-llama-3.2-1B
- Novaciano/Creative_RP-3.2-1B
- Novaciano/Instruct_RP-3.2-1B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged with the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using [D1rtyB1rd/Dirty-Alice-RP-NSFW-llama-3.2-1B](https://huggingface.co/D1rtyB1rd/Dirty-Alice-RP-NSFW-llama-3.2-1B) as the base model.

### Models Merged

The following models were included in the merge:
* [Novaciano/Dirty_RP-3.2-1B](https://huggingface.co/Novaciano/Dirty_RP-3.2-1B)
* [Novaciano/ChatML_RP-3.2-1B](https://huggingface.co/Novaciano/ChatML_RP-3.2-1B)
* [Novaciano/Creative_RP-3.2-1B](https://huggingface.co/Novaciano/Creative_RP-3.2-1B)
* [Novaciano/Instruct_RP-3.2-1B](https://huggingface.co/Novaciano/Instruct_RP-3.2-1B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Novaciano/Creative_RP-3.2-1B
    parameters:
      weight: 1.0
  - model: Novaciano/Instruct_RP-3.2-1B
    parameters:
      weight: 1.0
  - model: Novaciano/ChatML_RP-3.2-1B
    parameters:
      weight: 1.0
  - model: Novaciano/Dirty_RP-3.2-1B
    parameters:
      weight: 1.0
merge_method: model_stock
base_model: D1rtyB1rd/Dirty-Alice-RP-NSFW-llama-3.2-1B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```
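
For intuition, the Model Stock rule from the linked paper can be sketched per tensor: each fine-tuned model contributes a task vector (its difference from the base), the average pairwise cosine similarity of those task vectors determines an interpolation weight `t`, and the result interpolates between the base weights and the plain average of the fine-tuned weights. The NumPy sketch below is an illustration of that formula only, not mergekit's actual implementation (which operates layer by layer over full checkpoints); the function name and array shapes are assumptions for the example.

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Illustrative Model Stock merge of one weight tensor.

    base      -- weights of the base (pre-trained) model
    finetuned -- list of weight tensors from k fine-tuned models
    """
    k = len(finetuned)
    # Task vectors: how each fine-tuned model moved away from the base.
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between task vectors.
    sims = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(sims))
    # Interpolation weight from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Interpolate between the base and the average of the fine-tuned models.
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

When the task vectors agree (cosine similarity near 1), `t` approaches 1 and the merge is close to the plain average; when they are nearly orthogonal, `t` shrinks and the result stays close to the base weights.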