Commit bde2b33 (verified · 0 parents)

Super-squash branch 'main' using huggingface_hub

Co-authored-by: pandora-s <pandora-s@users.noreply.huggingface.co>
Co-authored-by: patrickvonplaten <patrickvonplaten@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ tekken.json filter=lfs diff=lfs merge=lfs -text
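These attributes route matching files through Git LFS. A minimal sketch (not part of the commit) of which files in this repo the rules capture; `fnmatch` only approximates gitattributes glob semantics, so treat this as illustrative:

```python
import fnmatch

# Subset of the LFS rules above.
lfs_patterns = ["*.safetensors", "tokenizer.json", "tekken.json"]
files = ["config.json", "chat_template.jinja",
         "consolidated.safetensors", "model-00001-of-00004.safetensors"]

for f in files:
    routed = any(fnmatch.fnmatch(f, p) for p in lfs_patterns)
    print(f"{f}: {'LFS' if routed else 'regular git'}")
```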
README.md ADDED
@@ -0,0 +1,130 @@
+ ---
+ library_name: vllm
+ language:
+ - en
+ - fr
+ - es
+ - de
+ - it
+ - pt
+ - nl
+ - zh
+ - ja
+ - ko
+ - ar
+ license: apache-2.0
+ inference: false
+ base_model:
+ - mistralai/Ministral-3-8B-Base-2512
+ extra_gated_description: >-
+ If you want to learn more about how we process your personal data, please read
+ our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
+ tags:
+ - mistral-common
+ ---
+
+ # Ministral 3 8B Instruct 2512 BF16
+
+ A balanced model in the Ministral 3 family, **Ministral 3 8B** is a powerful, efficient tiny language model with vision capabilities.
+
+ This model is the instruct post-trained version, fine-tuned to follow instructions, making it ideal for chat and instruction-based use cases.
+
+ The Ministral 3 family is designed for edge deployment and runs on a wide range of hardware. Ministral 3 8B can even be deployed locally: it fits in 24 GB of VRAM in BF16, and in less than 12 GB of RAM/VRAM when quantized.
+
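As a quick back-of-the-envelope check (not part of the original card), the 24 GB figure follows from the parameter count reported in `model.safetensors.index.json` further down in this commit:

```python
# BF16 stores 2 bytes per parameter; total_parameters is taken from
# model.safetensors.index.json in this same commit.
total_parameters = 8_918_026_240
bf16_bytes = total_parameters * 2
print(bf16_bytes)          # 17836052480 -- equals "total_size" in the index
print(bf16_bytes / 2**30)  # ~16.6 GiB of weights, leaving a 24 GB card headroom for KV cache
```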
+ We provide a lossless FP8 version [here](https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512); other formats and quantizations are available in the [Ministral 3 - Additional Checkpoints](https://huggingface.co/collections/mistralai/ministral-3-additional-checkpoints) collection.
+
+ ## Key Features
+ Ministral 3 8B consists of two main architectural components:
+ - **8.4B Language Model**
+ - **0.4B Vision Encoder**
+
+ The Ministral 3 8B Instruct model offers the following capabilities:
+ - **Vision**: Enables the model to analyze images and provide insights based on visual content, in addition to text.
+ - **Multilingual**: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.
+ - **System Prompt**: Maintains strong adherence and support for system prompts.
+ - **Agentic**: Offers best-in-class agentic capabilities with native function calling and JSON output.
+ - **Edge-Optimized**: Delivers best-in-class performance at a small scale, deployable anywhere.
+ - **Apache 2.0 License**: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
+ - **Large Context Window**: Supports a 256k context window.
+
+ ### Use Cases
+ Ministral 3 8B is perfect for balanced performance in local or embedded systems, combining versatility with efficiency.
+ - Chat interfaces in constrained environments
+ - Local daily-driver AI assistant
+ - Image/document description and understanding
+ - Translation and content generation
+ - Specialized agentic use cases
+ - Fine-tuning and specialization
+ - And more...
+
+ Bringing advanced AI capabilities to resource-constrained environments.
+
+ ## Ministral 3 Family
+
+ | Model Name | Type | Precision | Link |
+ |--------------------------------|--------------------|-----------|------------------------------------------------------------------------------------------|
+ | Ministral 3 3B Base 2512 | Base pre-trained | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-3B-Base-2512) |
+ | Ministral 3 3B Instruct 2512 | Instruct post-trained | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-3B-Instruct-2512) |
+ | Ministral 3 3B Reasoning 2512 | Reasoning capable | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-3B-Reasoning-2512) |
+ | Ministral 3 8B Base 2512 | Base pre-trained | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-8B-Base-2512) |
+ | **Ministral 3 8B Instruct 2512** | **Instruct post-trained** | **BF16** | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512) |
+ | Ministral 3 8B Reasoning 2512 | Reasoning capable | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-8B-Reasoning-2512) |
+ | Ministral 3 14B Base 2512 | Base pre-trained | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-14B-Base-2512) |
+ | Ministral 3 14B Instruct 2512 | Instruct post-trained | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-14B-Instruct-2512) |
+ | Ministral 3 14B Reasoning 2512 | Reasoning capable | BF16 | [Hugging Face](https://huggingface.co/mistralai/Ministral-3-14B-Reasoning-2512) |
+
+ Other formats available [here](https://huggingface.co/collections/mistralai/ministral-3-additional-checkpoints).
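The card's front matter declares `library_name: vllm`, so a minimal offline-inference sketch under that assumption follows. The exact engine arguments for Ministral 3 are not specified in this commit; `tokenizer_mode="mistral"` is carried over from earlier Mistral releases and may need adjusting:

```python
# Hedged sketch: serve the checkpoint with vLLM's offline API.
# tokenizer_mode="mistral" is an assumption (mistral-common tokenizer),
# not something this model card states.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Ministral-3-8B-Instruct-2512", tokenizer_mode="mistral")
sampling = SamplingParams(temperature=0.15, max_tokens=128)

messages = [{"role": "user", "content": "Summarize the Ministral 3 family in one sentence."}]
outputs = llm.chat(messages, sampling)
print(outputs[0].outputs[0].text)
```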
+
+ ## Benchmark Results
+
+ We compare Ministral 3 to similarly sized models.
+
+ ### Reasoning
+
+ | Model | AIME25 | AIME24 | GPQA Diamond | LiveCodeBench |
+ |---------------------------|-------------|-------------|--------------|---------------|
+ | **Ministral 3 14B** | <u>0.850</u>| <u>0.898</u>| <u>0.712</u> | <u>0.646</u> |
+ | Qwen3-14B (Thinking) | 0.737 | 0.837 | 0.663 | 0.593 |
+ | | | | | |
+ | **Ministral 3 8B** | 0.787 | <u>0.860</u>| 0.668 | <u>0.616</u> |
+ | Qwen3-VL-8B-Thinking | <u>0.798</u>| <u>0.860</u>| <u>0.671</u> | 0.580 |
+ | | | | | |
+ | **Ministral 3 3B** | <u>0.721</u>| <u>0.775</u>| 0.534 | <u>0.548</u> |
+ | Qwen3-VL-4B-Thinking | 0.697 | 0.729 | <u>0.601</u> | 0.513 |
+
+ ### Instruct
+
+ | Model | Arena Hard | WildBench | MATH Maj@1 | MM MTBench |
+ |---------------------------|-------------|------------|-------------|------------------|
+ | **Ministral 3 14B** | <u>0.551</u>| <u>68.5</u>| <u>0.904</u>| <u>8.49</u> |
+ | Qwen3 14B (Non-Thinking) | 0.427 | 65.1 | 0.870 | NOT MULTIMODAL |
+ | Gemma3-12B-Instruct | 0.436 | 63.2 | 0.854 | 6.70 |
+ | | | | | |
+ | **Ministral 3 8B** | 0.509 | <u>66.8</u>| 0.876 | <u>8.08</u> |
+ | Qwen3-VL-8B-Instruct | <u>0.528</u>| 66.3 | <u>0.946</u>| 8.00 |
+ | | | | | |
+ | **Ministral 3 3B** | 0.305 | <u>56.8</u>| 0.830 | 7.83 |
+ | Qwen3-VL-4B-Instruct | <u>0.438</u>| <u>56.8</u>| <u>0.900</u>| <u>8.01</u> |
+ | Qwen3-VL-2B-Instruct | 0.163 | 42.2 | 0.786 | 6.36 |
+ | Gemma3-4B-Instruct | 0.318 | 49.1 | 0.759 | 5.23 |
+
+ ### Base
+
+ | Model | Multilingual MMLU | MATH CoT 2-Shot | AGIEval 5-shot | MMLU Redux 5-shot | MMLU 5-shot | TriviaQA 5-shot |
+ |---------------------|-------------------|-----------------|----------------|-------------------|-------------|-----------------|
+ | **Ministral 3 14B** | 0.742 | <u>0.676</u> | 0.648 | 0.820 | 0.794 | 0.749 |
+ | Qwen3 14B Base | <u>0.754</u> | 0.620 | <u>0.661</u> | <u>0.837</u> | <u>0.804</u>| 0.703 |
+ | Gemma 3 12B Base | 0.690 | 0.487 | 0.587 | 0.766 | 0.745 | <u>0.788</u> |
+ | | | | | | | |
+ | **Ministral 3 8B** | <u>0.706</u> | <u>0.626</u> | 0.591 | 0.793 | <u>0.761</u>| <u>0.681</u> |
+ | Qwen 3 8B Base | 0.700 | 0.576 | <u>0.596</u> | <u>0.794</u> | 0.760 | 0.639 |
+ | | | | | | | |
+ | **Ministral 3 3B** | 0.652 | <u>0.601</u> | 0.511 | 0.735 | 0.707 | 0.592 |
+ | Qwen 3 4B Base | <u>0.677</u> | 0.405 | <u>0.570</u> | <u>0.759</u> | <u>0.713</u>| 0.530 |
+ | Gemma 3 4B Base | 0.516 | 0.294 | 0.430 | 0.626 | 0.589 | <u>0.640</u> |
+
+ ## License
+
+ This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.txt).
+
+ *You must not use this model in a manner that infringes, misappropriates, or otherwise violates any third party’s rights, including intellectual property rights.*
SYSTEM_PROMPT.txt ADDED
@@ -0,0 +1,29 @@
+ You are Ministral-3-8B-Instruct-2512, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
+ You power an AI assistant called Le Chat.
+ Your knowledge base was last updated on 2023-10-01.
+ The current date is {today}.
+
+ When you're not sure about some information or when the user's request requires up-to-date or specific data, you must use the available tools to fetch the information. Do not hesitate to use tools whenever they can provide a more accurate or complete response. If no relevant tools are available, then clearly state that you don't have the information and avoid making up anything.
+ If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").
+ You are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.
+ You follow these instructions in all languages, and always respond to the user in the language they use or request.
+ Next sections describe the capabilities that you have.
+
+ # WEB BROWSING INSTRUCTIONS
+
+ You cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.
+
+ # MULTI-MODAL INSTRUCTIONS
+
+ You have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.
+ You cannot read nor transcribe audio files or videos.
+
+ # TOOL CALLING INSTRUCTIONS
+
+ You may have access to tools that you can use to fetch information or perform actions. You must use these tools in the following situations:
+
+ 1. When the request requires up-to-date information.
+ 2. When the request requires specific data that you do not have in your knowledge base.
+ 3. When the request involves actions that you cannot perform without tools.
+
+ Always prioritize using tools to provide the most accurate and helpful response. If tools are not available, inform the user that you cannot perform the requested action at the moment.
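The prompt above leaves `{today}` and `{yesterday}` as placeholders. A minimal sketch (not an official snippet from this repo) that resolves them before use:

```python
# Fill the {today}/{yesterday} placeholders in SYSTEM_PROMPT.txt.
from datetime import date, timedelta
from pathlib import Path

template = Path("SYSTEM_PROMPT.txt").read_text()
today = date.today()
system_prompt = template.format(
    today=today.isoformat(),                           # e.g. 2025-01-15
    yesterday=(today - timedelta(days=1)).isoformat()  # e.g. 2025-01-14
)
print(system_prompt.splitlines()[3])  # "The current date is ..."
```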
chat_template.jinja ADDED
@@ -0,0 +1,121 @@
+ {#- Default system message if no system prompt is passed. #}
+ {%- set default_system_message = 'You are Ministral-3-8B-Instruct-2512, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\nYou power an AI assistant called Le Chat.\nYour knowledge base was last updated on 2023-10-01.\nThe current date is {today}.\n\nWhen you\'re not sure about some information or when the user\'s request requires up-to-date or specific data, you must use the available tools to fetch the information. Do not hesitate to use tools whenever they can provide a more accurate or complete response. If no relevant tools are available, then clearly state that you don\'t have the information and avoid making up anything.\nIf the user\'s question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").\nYou are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.\nYou follow these instructions in all languages, and always respond to the user in the language they use or request.\nNext sections describe the capabilities that you have.\n\n# WEB BROWSING INSTRUCTIONS\n\nYou cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.\n\n# MULTI-MODAL INSTRUCTIONS\n\nYou have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.\nYou cannot read nor transcribe audio files or videos.\n\n# TOOL CALLING INSTRUCTIONS\n\nYou may have access to tools that you can use to fetch information or perform actions. You must use these tools in the following situations:\n\n1. When the request requires up-to-date information.\n2. When the request requires specific data that you do not have in your knowledge base.\n3. When the request involves actions that you cannot perform without tools.\n\nAlways prioritize using tools to provide the most accurate and helpful response. If tools are not available, inform the user that you cannot perform the requested action at the moment.' %}
+
+ {#- Begin of sequence token. #}
+ {{- bos_token }}
+
+ {#- Handle system prompt if it exists. #}
+ {#- System prompt supports text content or text chunks. #}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- '[SYSTEM_PROMPT]' -}}
+ {%- if messages[0]['content'] is string %}
+ {{- messages[0]['content'] -}}
+ {%- else %}
+ {%- for block in messages[0]['content'] %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- else %}
+ {{- raise_exception('Only text chunks are supported in system message contents.') }}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+ {{- '[/SYSTEM_PROMPT]' -}}
+ {%- set loop_messages = messages[1:] %}
+ {%- else %}
+ {%- set loop_messages = messages %}
+ {%- if default_system_message != '' %}
+ {{- '[SYSTEM_PROMPT]' + default_system_message + '[/SYSTEM_PROMPT]' }}
+ {%- endif %}
+ {%- endif %}
+
+
+ {#- Tools definition #}
+ {%- set tools_definition = '' %}
+ {%- set has_tools = false %}
+ {%- if tools is defined and tools is not none and tools|length > 0 %}
+ {%- set has_tools = true %}
+ {%- set tools_definition = '[AVAILABLE_TOOLS]' + (tools| tojson) + '[/AVAILABLE_TOOLS]' %}
+ {{- tools_definition }}
+ {%- endif %}
+
+ {#- Checks for alternating user/assistant messages. #}
+ {%- set ns = namespace(index=0) %}
+ {%- for message in loop_messages %}
+ {%- if message.role == 'user' or (message.role == 'assistant' and (message.tool_calls is not defined or message.tool_calls is none or message.tool_calls | length == 0)) %}
+ {%- if (message['role'] == 'user') != (ns.index % 2 == 0) %}
+ {{- raise_exception('After the optional system message, conversation roles must alternate user and assistant roles except for tool calls and results.') }}
+ {%- endif %}
+ {%- set ns.index = ns.index + 1 %}
+ {%- endif %}
+ {%- endfor %}
+
+ {#- Handle conversation messages. #}
+ {%- for message in loop_messages %}
+
+ {#- User messages supports text content or text and image chunks. #}
+ {%- if message['role'] == 'user' %}
+ {%- if message['content'] is string %}
+ {{- '[INST]' + message['content'] + '[/INST]' }}
+ {%- elif message['content'] | length > 0 %}
+ {{- '[INST]' }}
+ {%- if message['content'] | length == 2 %}
+ {%- set blocks = message['content'] | sort(attribute='type') %}
+ {%- else %}
+ {%- set blocks = message['content'] %}
+ {%- endif %}
+ {%- for block in blocks %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- elif block['type'] in ['image', 'image_url'] %}
+ {{- '[IMG]' }}
+ {%- else %}
+ {{- raise_exception('Only text, image and image_url chunks are supported in user message content.') }}
+ {%- endif %}
+ {%- endfor %}
+ {{- '[/INST]' }}
+ {%- else %}
+ {{- raise_exception('User message must have a string or a list of chunks in content') }}
+ {%- endif %}
+
+ {#- Assistant messages supports text content or text and image chunks. #}
+ {%- elif message['role'] == 'assistant' %}
+ {%- if (message['content'] is none or message['content'] == '' or message['content']|length == 0) and (message['tool_calls'] is not defined or message['tool_calls'] is none or message['tool_calls']|length == 0) %}
+ {{- raise_exception('Assistant message must have a string or a list of chunks in content or a list of tool calls.') }}
+ {%- endif %}
+
+ {%- if message['content'] is string %}
+ {{- message['content'] }}
+ {%- elif message['content'] | length > 0 %}
+ {%- for block in message['content'] %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- else %}
+ {{- raise_exception('Only text chunks are supported in assistant message contents.') }}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+
+ {%- if message['tool_calls'] is defined and message['tool_calls'] is not none and message['tool_calls']|length > 0 %}
+ {%- for tool in message['tool_calls'] %}
+ {%- set arguments = tool['function']['arguments'] %}
+ {%- if arguments is not string %}
+ {%- set arguments = arguments|tojson|safe %}
+ {%- elif arguments == '' %}
+ {%- set arguments = '{}' %}
+ {%- endif %}
+ {{- '[TOOL_CALLS]' + tool['function']['name'] + '[ARGS]' + arguments }}
+ {%- endfor %}
+ {%- endif %}
+
+ {#- End of sequence token for each assistant messages. #}
+ {{- eos_token }}
+
+ {#- Tool messages only supports text content. #}
+ {%- elif message['role'] == 'tool' %}
+ {{- '[TOOL_RESULTS]' + message['content']|string + '[/TOOL_RESULTS]' }}
+
+ {#- Raise exception for unsupported roles. #}
+ {%- else %}
+ {{- raise_exception('Only user, assistant and tool roles are supported, got ' + message['role']) }}
+ {%- endif %}
+ {%- endfor %}
config.json ADDED
@@ -0,0 +1,62 @@
+ {
+ "architectures": [
+ "Mistral3ForConditionalGeneration"
+ ],
+ "dtype": "bfloat16",
+ "image_token_index": 10,
+ "model_type": "mistral3",
+ "multimodal_projector_bias": false,
+ "projector_hidden_act": "gelu",
+ "spatial_merge_size": 2,
+ "tie_word_embeddings": false,
+ "text_config": {
+ "attention_dropout": 0.0,
+ "head_dim": 128,
+ "hidden_act": "silu",
+ "hidden_size": 4096,
+ "initializer_range": 0.02,
+ "intermediate_size": 14336,
+ "max_position_embeddings": 262144,
+ "model_type": "ministral3",
+ "num_attention_heads": 32,
+ "num_hidden_layers": 34,
+ "num_key_value_heads": 8,
+ "rms_norm_eps": 1e-05,
+ "rope_parameters": {
+ "beta_fast": 32.0,
+ "beta_slow": 1.0,
+ "factor": 16.0,
+ "llama_4_scaling_beta": 0.1,
+ "mscale": 1.0,
+ "mscale_all_dim": 1.0,
+ "original_max_position_embeddings": 16384,
+ "rope_theta": 1000000.0,
+ "rope_type": "yarn",
+ "type": "yarn"
+ },
+ "sliding_window": null,
+ "use_cache": true,
+ "vocab_size": 131072
+ },
+ "transformers_version": "5.0.0.dev0",
+ "vision_config": {
+ "attention_dropout": 0.0,
+ "head_dim": 64,
+ "hidden_act": "silu",
+ "hidden_size": 1024,
+ "image_size": 1540,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "model_type": "pixtral",
+ "num_attention_heads": 16,
+ "num_channels": 3,
+ "num_hidden_layers": 24,
+ "patch_size": 14,
+ "rope_parameters": {
+ "rope_theta": 10000.0,
+ "rope_type": "default"
+ },
+ "rope_theta": 10000.0
+ },
+ "vision_feature_layer": -1
+ }
consolidated.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44db184fe2b0b08edea6370ad8403d7b17d604d38127133f50a3da111a874eca
+ size 17836115976
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "max_length": 262144,
+ "pad_token_id": 11,
+ "transformers_version": "5.0.0.dev0"
+ }
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95f4da19c81e6a06d4f0c61cac3dfdd85ef463a7955473dc4c07e570fd2342f9
+ size 4984292952
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18275e4c8413d0ed4b0cb8380fe1fd4faeee34189960fdd92ef5b0d3f4ed1b98
+ size 4999804256
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:980d1d3635ae29a6e629e2f7ad589c882eba270c3f66da74da4f8ccb11845687
+ size 4915917680
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa80e52af465c5644b18b623eeeebb1d50e47dafbee73816bf26f2473f88d369
+ size 2936108304
model.safetensors.index.json ADDED
@@ -0,0 +1,539 @@
+ {
+ "metadata": {
+ "total_parameters": 8918026240,
+ "total_size": 17836052480
+ },
+ "weight_map": {
+ "language_model.lm_head.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.embed_tokens.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.18.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.28.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.28.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.28.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.28.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.28.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.28.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.29.input_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.29.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.29.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.29.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.29.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "language_model.model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.30.input_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.30.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.30.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.30.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.30.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.30.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.30.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.30.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.30.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.31.input_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.31.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.31.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.31.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.31.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.31.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.31.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.31.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.31.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.32.input_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.32.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.32.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.32.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.32.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.32.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.32.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.32.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.32.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.33.input_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.33.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.33.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.33.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.33.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.33.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.33.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.33.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.33.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
+ "language_model.model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "language_model.model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.7.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.7.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.7.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.7.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.7.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.7.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "language_model.model.norm.weight": "model-00004-of-00004.safetensors",
+ "multi_modal_projector.linear_1.weight": "model-00001-of-00004.safetensors",
+ "multi_modal_projector.linear_2.weight": "model-00001-of-00004.safetensors",
+ "multi_modal_projector.norm.weight": "model-00001-of-00004.safetensors",
+ "multi_modal_projector.patch_merger.merging_layer.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.ln_pre.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.patch_conv.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.0.attention.k_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.0.attention.o_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.0.attention.q_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.0.attention.v_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.0.attention_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.0.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.0.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.0.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.0.ffn_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.1.attention.k_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.1.attention.o_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.1.attention.q_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.1.attention.v_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.1.attention_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.1.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.1.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.1.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.1.ffn_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.10.attention.k_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.10.attention.o_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.10.attention.q_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.10.attention.v_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.10.attention_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.10.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.10.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.10.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.10.ffn_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.11.attention.k_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.11.attention.o_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.11.attention.q_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.11.attention.v_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.11.attention_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.11.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.11.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.11.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.11.ffn_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.12.attention.k_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.12.attention.o_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.12.attention.q_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.12.attention.v_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.12.attention_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.12.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.12.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.12.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.12.ffn_norm.weight": "model-00001-of-00004.safetensors",
+ "vision_tower.transformer.layers.13.attention.k_proj.weight": "model-00001-of-00004.safetensors",
368
+ "vision_tower.transformer.layers.13.attention.o_proj.weight": "model-00001-of-00004.safetensors",
369
+ "vision_tower.transformer.layers.13.attention.q_proj.weight": "model-00001-of-00004.safetensors",
370
+ "vision_tower.transformer.layers.13.attention.v_proj.weight": "model-00001-of-00004.safetensors",
371
+ "vision_tower.transformer.layers.13.attention_norm.weight": "model-00001-of-00004.safetensors",
372
+ "vision_tower.transformer.layers.13.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
373
+ "vision_tower.transformer.layers.13.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
374
+ "vision_tower.transformer.layers.13.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
375
+ "vision_tower.transformer.layers.13.ffn_norm.weight": "model-00001-of-00004.safetensors",
376
+ "vision_tower.transformer.layers.14.attention.k_proj.weight": "model-00001-of-00004.safetensors",
377
+ "vision_tower.transformer.layers.14.attention.o_proj.weight": "model-00001-of-00004.safetensors",
378
+ "vision_tower.transformer.layers.14.attention.q_proj.weight": "model-00001-of-00004.safetensors",
379
+ "vision_tower.transformer.layers.14.attention.v_proj.weight": "model-00001-of-00004.safetensors",
380
+ "vision_tower.transformer.layers.14.attention_norm.weight": "model-00001-of-00004.safetensors",
381
+ "vision_tower.transformer.layers.14.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
382
+ "vision_tower.transformer.layers.14.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
383
+ "vision_tower.transformer.layers.14.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
384
+ "vision_tower.transformer.layers.14.ffn_norm.weight": "model-00001-of-00004.safetensors",
385
+ "vision_tower.transformer.layers.15.attention.k_proj.weight": "model-00001-of-00004.safetensors",
386
+ "vision_tower.transformer.layers.15.attention.o_proj.weight": "model-00001-of-00004.safetensors",
387
+ "vision_tower.transformer.layers.15.attention.q_proj.weight": "model-00001-of-00004.safetensors",
388
+ "vision_tower.transformer.layers.15.attention.v_proj.weight": "model-00001-of-00004.safetensors",
389
+ "vision_tower.transformer.layers.15.attention_norm.weight": "model-00001-of-00004.safetensors",
390
+ "vision_tower.transformer.layers.15.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
391
+ "vision_tower.transformer.layers.15.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
392
+ "vision_tower.transformer.layers.15.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
393
+ "vision_tower.transformer.layers.15.ffn_norm.weight": "model-00001-of-00004.safetensors",
394
+ "vision_tower.transformer.layers.16.attention.k_proj.weight": "model-00001-of-00004.safetensors",
395
+ "vision_tower.transformer.layers.16.attention.o_proj.weight": "model-00001-of-00004.safetensors",
396
+ "vision_tower.transformer.layers.16.attention.q_proj.weight": "model-00001-of-00004.safetensors",
397
+ "vision_tower.transformer.layers.16.attention.v_proj.weight": "model-00001-of-00004.safetensors",
398
+ "vision_tower.transformer.layers.16.attention_norm.weight": "model-00001-of-00004.safetensors",
399
+ "vision_tower.transformer.layers.16.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
400
+ "vision_tower.transformer.layers.16.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
401
+ "vision_tower.transformer.layers.16.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
402
+ "vision_tower.transformer.layers.16.ffn_norm.weight": "model-00001-of-00004.safetensors",
403
+ "vision_tower.transformer.layers.17.attention.k_proj.weight": "model-00001-of-00004.safetensors",
404
+ "vision_tower.transformer.layers.17.attention.o_proj.weight": "model-00001-of-00004.safetensors",
405
+ "vision_tower.transformer.layers.17.attention.q_proj.weight": "model-00001-of-00004.safetensors",
406
+ "vision_tower.transformer.layers.17.attention.v_proj.weight": "model-00001-of-00004.safetensors",
407
+ "vision_tower.transformer.layers.17.attention_norm.weight": "model-00001-of-00004.safetensors",
408
+ "vision_tower.transformer.layers.17.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
409
+ "vision_tower.transformer.layers.17.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
410
+ "vision_tower.transformer.layers.17.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
411
+ "vision_tower.transformer.layers.17.ffn_norm.weight": "model-00001-of-00004.safetensors",
412
+ "vision_tower.transformer.layers.18.attention.k_proj.weight": "model-00001-of-00004.safetensors",
413
+ "vision_tower.transformer.layers.18.attention.o_proj.weight": "model-00001-of-00004.safetensors",
414
+ "vision_tower.transformer.layers.18.attention.q_proj.weight": "model-00001-of-00004.safetensors",
415
+ "vision_tower.transformer.layers.18.attention.v_proj.weight": "model-00001-of-00004.safetensors",
416
+ "vision_tower.transformer.layers.18.attention_norm.weight": "model-00001-of-00004.safetensors",
417
+ "vision_tower.transformer.layers.18.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
418
+ "vision_tower.transformer.layers.18.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
419
+ "vision_tower.transformer.layers.18.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
420
+ "vision_tower.transformer.layers.18.ffn_norm.weight": "model-00001-of-00004.safetensors",
421
+ "vision_tower.transformer.layers.19.attention.k_proj.weight": "model-00001-of-00004.safetensors",
422
+ "vision_tower.transformer.layers.19.attention.o_proj.weight": "model-00001-of-00004.safetensors",
423
+ "vision_tower.transformer.layers.19.attention.q_proj.weight": "model-00001-of-00004.safetensors",
424
+ "vision_tower.transformer.layers.19.attention.v_proj.weight": "model-00001-of-00004.safetensors",
425
+ "vision_tower.transformer.layers.19.attention_norm.weight": "model-00001-of-00004.safetensors",
426
+ "vision_tower.transformer.layers.19.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
427
+ "vision_tower.transformer.layers.19.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
428
+ "vision_tower.transformer.layers.19.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
429
+ "vision_tower.transformer.layers.19.ffn_norm.weight": "model-00001-of-00004.safetensors",
430
+ "vision_tower.transformer.layers.2.attention.k_proj.weight": "model-00001-of-00004.safetensors",
431
+ "vision_tower.transformer.layers.2.attention.o_proj.weight": "model-00001-of-00004.safetensors",
432
+ "vision_tower.transformer.layers.2.attention.q_proj.weight": "model-00001-of-00004.safetensors",
433
+ "vision_tower.transformer.layers.2.attention.v_proj.weight": "model-00001-of-00004.safetensors",
434
+ "vision_tower.transformer.layers.2.attention_norm.weight": "model-00001-of-00004.safetensors",
435
+ "vision_tower.transformer.layers.2.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
436
+ "vision_tower.transformer.layers.2.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
437
+ "vision_tower.transformer.layers.2.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
438
+ "vision_tower.transformer.layers.2.ffn_norm.weight": "model-00001-of-00004.safetensors",
439
+ "vision_tower.transformer.layers.20.attention.k_proj.weight": "model-00001-of-00004.safetensors",
440
+ "vision_tower.transformer.layers.20.attention.o_proj.weight": "model-00001-of-00004.safetensors",
441
+ "vision_tower.transformer.layers.20.attention.q_proj.weight": "model-00001-of-00004.safetensors",
442
+ "vision_tower.transformer.layers.20.attention.v_proj.weight": "model-00001-of-00004.safetensors",
443
+ "vision_tower.transformer.layers.20.attention_norm.weight": "model-00001-of-00004.safetensors",
444
+ "vision_tower.transformer.layers.20.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
445
+ "vision_tower.transformer.layers.20.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
446
+ "vision_tower.transformer.layers.20.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
447
+ "vision_tower.transformer.layers.20.ffn_norm.weight": "model-00001-of-00004.safetensors",
448
+ "vision_tower.transformer.layers.21.attention.k_proj.weight": "model-00001-of-00004.safetensors",
449
+ "vision_tower.transformer.layers.21.attention.o_proj.weight": "model-00001-of-00004.safetensors",
450
+ "vision_tower.transformer.layers.21.attention.q_proj.weight": "model-00001-of-00004.safetensors",
451
+ "vision_tower.transformer.layers.21.attention.v_proj.weight": "model-00001-of-00004.safetensors",
452
+ "vision_tower.transformer.layers.21.attention_norm.weight": "model-00001-of-00004.safetensors",
453
+ "vision_tower.transformer.layers.21.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
454
+ "vision_tower.transformer.layers.21.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
455
+ "vision_tower.transformer.layers.21.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
456
+ "vision_tower.transformer.layers.21.ffn_norm.weight": "model-00001-of-00004.safetensors",
457
+ "vision_tower.transformer.layers.22.attention.k_proj.weight": "model-00001-of-00004.safetensors",
458
+ "vision_tower.transformer.layers.22.attention.o_proj.weight": "model-00001-of-00004.safetensors",
459
+ "vision_tower.transformer.layers.22.attention.q_proj.weight": "model-00001-of-00004.safetensors",
460
+ "vision_tower.transformer.layers.22.attention.v_proj.weight": "model-00001-of-00004.safetensors",
461
+ "vision_tower.transformer.layers.22.attention_norm.weight": "model-00001-of-00004.safetensors",
462
+ "vision_tower.transformer.layers.22.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
463
+ "vision_tower.transformer.layers.22.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
464
+ "vision_tower.transformer.layers.22.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
465
+ "vision_tower.transformer.layers.22.ffn_norm.weight": "model-00001-of-00004.safetensors",
466
+ "vision_tower.transformer.layers.23.attention.k_proj.weight": "model-00001-of-00004.safetensors",
467
+ "vision_tower.transformer.layers.23.attention.o_proj.weight": "model-00001-of-00004.safetensors",
468
+ "vision_tower.transformer.layers.23.attention.q_proj.weight": "model-00001-of-00004.safetensors",
469
+ "vision_tower.transformer.layers.23.attention.v_proj.weight": "model-00001-of-00004.safetensors",
470
+ "vision_tower.transformer.layers.23.attention_norm.weight": "model-00001-of-00004.safetensors",
471
+ "vision_tower.transformer.layers.23.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
472
+ "vision_tower.transformer.layers.23.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
473
+ "vision_tower.transformer.layers.23.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
474
+ "vision_tower.transformer.layers.23.ffn_norm.weight": "model-00001-of-00004.safetensors",
475
+ "vision_tower.transformer.layers.3.attention.k_proj.weight": "model-00001-of-00004.safetensors",
476
+ "vision_tower.transformer.layers.3.attention.o_proj.weight": "model-00001-of-00004.safetensors",
477
+ "vision_tower.transformer.layers.3.attention.q_proj.weight": "model-00001-of-00004.safetensors",
478
+ "vision_tower.transformer.layers.3.attention.v_proj.weight": "model-00001-of-00004.safetensors",
479
+ "vision_tower.transformer.layers.3.attention_norm.weight": "model-00001-of-00004.safetensors",
480
+ "vision_tower.transformer.layers.3.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
481
+ "vision_tower.transformer.layers.3.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
482
+ "vision_tower.transformer.layers.3.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
483
+ "vision_tower.transformer.layers.3.ffn_norm.weight": "model-00001-of-00004.safetensors",
484
+ "vision_tower.transformer.layers.4.attention.k_proj.weight": "model-00001-of-00004.safetensors",
485
+ "vision_tower.transformer.layers.4.attention.o_proj.weight": "model-00001-of-00004.safetensors",
486
+ "vision_tower.transformer.layers.4.attention.q_proj.weight": "model-00001-of-00004.safetensors",
487
+ "vision_tower.transformer.layers.4.attention.v_proj.weight": "model-00001-of-00004.safetensors",
488
+ "vision_tower.transformer.layers.4.attention_norm.weight": "model-00001-of-00004.safetensors",
489
+ "vision_tower.transformer.layers.4.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
490
+ "vision_tower.transformer.layers.4.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
491
+ "vision_tower.transformer.layers.4.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
492
+ "vision_tower.transformer.layers.4.ffn_norm.weight": "model-00001-of-00004.safetensors",
493
+ "vision_tower.transformer.layers.5.attention.k_proj.weight": "model-00001-of-00004.safetensors",
494
+ "vision_tower.transformer.layers.5.attention.o_proj.weight": "model-00001-of-00004.safetensors",
495
+ "vision_tower.transformer.layers.5.attention.q_proj.weight": "model-00001-of-00004.safetensors",
496
+ "vision_tower.transformer.layers.5.attention.v_proj.weight": "model-00001-of-00004.safetensors",
497
+ "vision_tower.transformer.layers.5.attention_norm.weight": "model-00001-of-00004.safetensors",
498
+ "vision_tower.transformer.layers.5.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
499
+ "vision_tower.transformer.layers.5.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
500
+ "vision_tower.transformer.layers.5.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
501
+ "vision_tower.transformer.layers.5.ffn_norm.weight": "model-00001-of-00004.safetensors",
502
+ "vision_tower.transformer.layers.6.attention.k_proj.weight": "model-00001-of-00004.safetensors",
503
+ "vision_tower.transformer.layers.6.attention.o_proj.weight": "model-00001-of-00004.safetensors",
504
+ "vision_tower.transformer.layers.6.attention.q_proj.weight": "model-00001-of-00004.safetensors",
505
+ "vision_tower.transformer.layers.6.attention.v_proj.weight": "model-00001-of-00004.safetensors",
506
+ "vision_tower.transformer.layers.6.attention_norm.weight": "model-00001-of-00004.safetensors",
507
+ "vision_tower.transformer.layers.6.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
508
+ "vision_tower.transformer.layers.6.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
509
+ "vision_tower.transformer.layers.6.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
510
+ "vision_tower.transformer.layers.6.ffn_norm.weight": "model-00001-of-00004.safetensors",
511
+ "vision_tower.transformer.layers.7.attention.k_proj.weight": "model-00001-of-00004.safetensors",
512
+ "vision_tower.transformer.layers.7.attention.o_proj.weight": "model-00001-of-00004.safetensors",
513
+ "vision_tower.transformer.layers.7.attention.q_proj.weight": "model-00001-of-00004.safetensors",
514
+ "vision_tower.transformer.layers.7.attention.v_proj.weight": "model-00001-of-00004.safetensors",
515
+ "vision_tower.transformer.layers.7.attention_norm.weight": "model-00001-of-00004.safetensors",
516
+ "vision_tower.transformer.layers.7.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
517
+ "vision_tower.transformer.layers.7.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
518
+ "vision_tower.transformer.layers.7.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
519
+ "vision_tower.transformer.layers.7.ffn_norm.weight": "model-00001-of-00004.safetensors",
520
+ "vision_tower.transformer.layers.8.attention.k_proj.weight": "model-00001-of-00004.safetensors",
521
+ "vision_tower.transformer.layers.8.attention.o_proj.weight": "model-00001-of-00004.safetensors",
522
+ "vision_tower.transformer.layers.8.attention.q_proj.weight": "model-00001-of-00004.safetensors",
523
+ "vision_tower.transformer.layers.8.attention.v_proj.weight": "model-00001-of-00004.safetensors",
524
+ "vision_tower.transformer.layers.8.attention_norm.weight": "model-00001-of-00004.safetensors",
525
+ "vision_tower.transformer.layers.8.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
526
+ "vision_tower.transformer.layers.8.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
527
+ "vision_tower.transformer.layers.8.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
528
+ "vision_tower.transformer.layers.8.ffn_norm.weight": "model-00001-of-00004.safetensors",
529
+ "vision_tower.transformer.layers.9.attention.k_proj.weight": "model-00001-of-00004.safetensors",
530
+ "vision_tower.transformer.layers.9.attention.o_proj.weight": "model-00001-of-00004.safetensors",
531
+ "vision_tower.transformer.layers.9.attention.q_proj.weight": "model-00001-of-00004.safetensors",
532
+ "vision_tower.transformer.layers.9.attention.v_proj.weight": "model-00001-of-00004.safetensors",
533
+ "vision_tower.transformer.layers.9.attention_norm.weight": "model-00001-of-00004.safetensors",
534
+ "vision_tower.transformer.layers.9.feed_forward.down_proj.weight": "model-00001-of-00004.safetensors",
535
+ "vision_tower.transformer.layers.9.feed_forward.gate_proj.weight": "model-00001-of-00004.safetensors",
536
+ "vision_tower.transformer.layers.9.feed_forward.up_proj.weight": "model-00001-of-00004.safetensors",
537
+ "vision_tower.transformer.layers.9.ffn_norm.weight": "model-00001-of-00004.safetensors"
538
+ }
539
+ }
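The `weight_map` above is the standard safetensors sharding index: every tensor name points at the shard file that stores it, so a loader can open only the shard it needs. A minimal sketch of resolving one tensor through the index; the shard file names are the ones from this diff, and the top-level `weight_map` key is the usual index layout, assumed here:

```python
import json
from safetensors import safe_open

# Resolve a single tensor through model.safetensors.index.json, assuming
# the four model-0000N-of-00004.safetensors shards sit next to the index.
with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "vision_tower.patch_conv.weight"
shard = index["weight_map"][name]  # -> "model-00001-of-00004.safetensors"
with safe_open(shard, framework="pt") as st:
    tensor = st.get_tensor(name)   # reads just this tensor, not the whole shard
print(name, tuple(tensor.shape), tensor.dtype)
```

Note how the split is organized in the map above: the vision tower and multimodal projector live in shard 1, mid-stack decoder layers in shard 2, and the final `language_model.model.norm.weight` in shard 4.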
params.json ADDED
@@ -0,0 +1,47 @@
+ {
+ "dim": 4096,
+ "n_layers": 34,
+ "head_dim": 128,
+ "hidden_dim": 14336,
+ "n_heads": 32,
+ "n_kv_heads": 8,
+ "rope_theta": 1000000.0,
+ "norm_eps": 1e-05,
+ "vocab_size": 131072,
+ "tied_embeddings": false,
+ "max_position_embeddings": 262144,
+ "llama_4_scaling": {
+ "original_max_position_embeddings": 16384,
+ "beta": 0.1
+ },
+ "q_lora_rank": null,
+ "qk_rope_head_dim": null,
+ "qk_nope_head_dim": null,
+ "kv_lora_rank": null,
+ "v_head_dim": null,
+ "yarn": {
+ "original_max_position_embeddings": 16384,
+ "factor": 16,
+ "apply_scale": false,
+ "beta": 32,
+ "alpha": 1
+ },
+ "vision_encoder": {
+ "image_token_id": 10,
+ "image_break_token_id": 12,
+ "image_end_token_id": 13,
+ "intermediate_size": 4096,
+ "num_hidden_layers": 24,
+ "num_attention_heads": 16,
+ "mm_projector_id": "patch_merge",
+ "spatial_merge_size": 2,
+ "hidden_size": 1024,
+ "num_channels": 3,
+ "image_size": 1540,
+ "max_image_size": 1540,
+ "patch_size": 14,
+ "rope_theta": 10000.0,
+ "add_pre_mm_projector_layer_norm": true,
+ "adapter_bias": false
+ }
+ }
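A few deployment numbers fall straight out of these fields: 32 query heads sharing 8 KV heads is 4-way grouped-query attention, and with `head_dim` 128 across 34 layers the KV cache costs 2 × 34 × 8 × 128 = 69,632 values (~136 KiB in BF16) per token. A sketch of that arithmetic; this is generic GQA bookkeeping over params.json, not an official sizing tool, and real servers page or quantize the cache:

```python
import json

# Derive GQA and KV-cache figures from params.json (plain arithmetic).
with open("params.json") as f:
    p = json.load(f)

group = p["n_heads"] // p["n_kv_heads"]                        # 32 / 8 = 4
kv_per_token = 2 * p["n_layers"] * p["n_kv_heads"] * p["head_dim"]
kv_bytes = kv_per_token * 2                                    # BF16 = 2 bytes
print(f"{group} query heads per KV head")
print(f"KV cache: {kv_bytes / 1024:.0f} KiB/token, "
      f"{kv_bytes * p['max_position_embeddings'] / 2**30:.0f} GiB "
      f"at the full {p['max_position_embeddings']}-token window")
```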
processor_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+ "image_break_token": "[IMG_BREAK]",
+ "image_end_token": "[IMG_END]",
+ "image_processor": {
+ "crop_size": null,
+ "data_format": "channels_first",
+ "device": null,
+ "disable_grouping": null,
+ "do_center_crop": null,
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_pad": null,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.48145466,
+ 0.4578275,
+ 0.40821073
+ ],
+ "image_processor_type": "PixtralImageProcessorFast",
+ "image_seq_length": null,
+ "image_std": [
+ 0.26862954,
+ 0.26130258,
+ 0.27577711
+ ],
+ "input_data_format": null,
+ "pad_size": null,
+ "patch_size": 14,
+ "processor_class": "PixtralProcessor",
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "return_tensors": null,
+ "size": {
+ "longest_edge": 1540
+ }
+ },
+ "image_token": "[IMG]",
+ "patch_size": 14,
+ "processor_class": "PixtralProcessor",
+ "spatial_merge_size": 2
+ }
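Image cost follows from three of these fields: the processor scales the longest edge down to at most 1540 px, tiles the result into 14 × 14 patches, and merges patches 2 × 2 (`spatial_merge_size`) before the projector. A rough token estimate under those numbers; the exact resize and rounding rules inside `PixtralImageProcessorFast` may shift the count slightly:

```python
import math

def approx_image_tokens(width: int, height: int) -> int:
    # Values from processor_config.json above: longest_edge=1540,
    # patch_size=14, spatial_merge_size=2. An estimate, not the exact
    # count the real processor produces.
    longest_edge, patch, merge = 1540, 14, 2
    scale = min(1.0, longest_edge / max(width, height))
    cols = math.ceil(width * scale / patch)
    rows = math.ceil(height * scale / patch)
    return math.ceil(cols / merge) * math.ceil(rows / merge)

print(approx_image_tokens(1540, 1540))  # 110x110 patches -> 55*55 = 3025 tokens
```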
special_tokens_map.json ADDED
The diff for this file is too large to render. See raw diff
tekken.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e29d19ea32eb7e26e6c0572d57cb7f9eca0f4420e0e0fe6ae1cf3be94da1c0d6
+ size 16753777
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:577575622324b2e099e2648be26bdeb5e5815ffe66d7004e9e3ddbf421db6bf1
+ size 17078110
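Both `tekken.json` and `tokenizer.json` are checked in as git-LFS pointers (the `version`/`oid`/`size` triplets above), so the real files must be pulled via LFS before use. A sketch of driving the Tekken tokenizer through mistral-common, assuming `MistralTokenizer.from_file` and these instruct request types are exposed by the installed mistral-common version:

```python
# Tokenize a chat request with the Tekken tokenizer from this repo.
# Requires the actual tekken.json (pulled via git-lfs, not the pointer)
# and a mistral-common release providing these imports.
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tok = MistralTokenizer.from_file("tekken.json")
encoded = tok.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="Hello!")])
)
print(len(encoded.tokens), encoded.text[:80])
```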
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff