---
base_model:
- mistralai/Mistral-7B-v0.1
- OpenPipe/mistral-ft-optimized-1218
- abacusai/Slerp-CM-mist-dpo
- samir-fama/SamirGPT-v1
library_name: transformers
tags:
- mergekit
- merge
---
# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model.

### Models Merged

The following models were included in the merge:

* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
merge_method: dare_ties
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 32]
        model: mistralai/Mistral-7B-v0.1
      - layer_range: [0, 32]
        model: samir-fama/SamirGPT-v1
        parameters:
          density: 0.53
          weight: 0.4
      - layer_range: [0, 32]
        model: abacusai/Slerp-CM-mist-dpo
        parameters:
          density: 0.53
          weight: 0.3
      - layer_range: [0, 32]
        model: OpenPipe/mistral-ft-optimized-1218
        parameters:
          density: 0.53
          weight: 0.3
parameters:
  int8_mask: 1.0
```
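### Usage

The result of the merge is a standard Mistral-architecture checkpoint, so it can be loaded with the `transformers` library like any other causal language model. The sketch below is illustrative only: the repository id `your-username/merged` is a placeholder for wherever this merge is actually hosted, and the prompt is arbitrary.

```python
# Minimal, hedged usage sketch for the merged model with transformers.
# "your-username/merged" is a placeholder repo id, not the actual repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merged"  # replace with the real repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config above
    device_map="auto",
)

prompt = "Explain what a DARE TIES model merge does in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

To reproduce the merge itself, saving the YAML above to a file (e.g. `config.yaml`) and running mergekit's `mergekit-yaml config.yaml ./merged-model` should regenerate the weights, assuming a recent mergekit installation and enough disk space for all four source models.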