---
base_model:
- Qwen/Qwen2.5-3B-Instruct
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

The model was merged using the **passthrough merge method** with a **sliding window approach**: consecutive 4-layer windows of the base model, each shifted by 2 layers, are stacked on top of one another.

### Duplicated Layers

Because of the sliding window, adjacent slices overlap by two layers. The following layers of the base model are duplicated across slices:

- Layers 2 through 33, each of which appears in exactly two consecutive slices (layers 0–1 and 34–35 appear only once).

With 17 slices of 4 layers each, the merged model has 68 layers, compared with 36 in the base model. A sketch for generating this slice list programmatically follows the configuration below.

### Models Merged

The following models were included in the merge:

* [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [0, 4]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [2, 6]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [4, 8]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [6, 10]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [8, 12]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [10, 14]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [12, 16]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [14, 18]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [16, 20]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [18, 22]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [20, 24]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [22, 26]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [24, 28]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [26, 30]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [28, 32]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [30, 34]
- sources:
  - model: Qwen/Qwen2.5-3B-Instruct
    layer_range: [32, 36]
tokenizer_source: Qwen/Qwen2.5-3B-Instruct
merge_method: passthrough
dtype: bfloat16
```
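
### Slice Generation Sketch

The slice list above follows a regular pattern (a 4-layer window moved in steps of 2 over 36 layers), so it can be generated rather than written by hand. The Python sketch below is not part of mergekit; it simply reproduces the `layer_range` entries from the configuration, counts the layers in the merged model, and lists the duplicated layers.

```python
from collections import Counter

# Assumptions: Qwen2.5-3B-Instruct has 36 transformer layers; the merge uses
# a 4-layer window shifted by 2 layers per slice, as in the YAML above.
NUM_LAYERS, WINDOW, STRIDE = 36, 4, 2
MODEL = "Qwen/Qwen2.5-3B-Instruct"

slices = [
    {"sources": [{"model": MODEL, "layer_range": [start, start + WINDOW]}]}
    for start in range(0, NUM_LAYERS - WINDOW + 1, STRIDE)
]

# 17 slices x 4 layers = 68 layers in the merged model.
merged_layers = sum(WINDOW for _ in slices)
print(f"{len(slices)} slices, {merged_layers} layers after merging")

# Layers that appear in more than one slice: 2 through 33.
counts = Counter(layer for s in slices
                 for layer in range(*s["sources"][0]["layer_range"]))
print("duplicated:", sorted(l for l, c in counts.items() if c > 1))
```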
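
### Usage

A minimal loading sketch with `transformers` is shown below. The repository id is a placeholder; substitute the actual name of this model. `device_map="auto"` additionally requires `accelerate` to be installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/qwen2.5-3b-passthrough-merge"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Give a one-sentence summary of what a passthrough merge is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```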