Text Generation
Transformers
Safetensors
granite
code
qiskit
conversational
ndupuis committed on
Commit 3f13c0e · 1 Parent(s): 0a1a65d

Add granite3.2-8b-qiskit model

README.md CHANGED
@@ -1,3 +1,80 @@
- ---
- license: apache-2.0
- ---
+ ---
+ pipeline_tag: text-generation
+ inference: false
+ license: apache-2.0
+ datasets:
+ - public-qiskit
+ - synthetic-qiskit
+ metrics:
+ - code_eval
+ library_name: transformers
+ tags:
+ - code
+ - granite
+ - qiskit
+ ---
+
+ # granite-3.2-8b-qiskit
+
+ ## Model Summary
+
+ **granite-3.2-8b-qiskit** is an 8B-parameter model, extended-pretrained and fine-tuned on top of granite-3.2-8b-base using Qiskit code and instruction data to improve its ability to write high-quality, non-deprecated Qiskit code. We used only data with the following licenses: Apache 2.0, MIT, the Unlicense, Mulan PSL Version 2, BSD-2, BSD-3, and Creative Commons Attribution 4.0.
+
+ - **Developers:** IBM Quantum & IBM Research
+ - **GitHub Repository:** Pending
+ - **Related Papers:** [Qiskit Code Assistant: Training LLMs for generating Quantum Computing Code](https://arxiv.org/abs/2405.19495) and [Qiskit HumanEval: An Evaluation Benchmark For Quantum Code Generative Models](https://arxiv.org/abs/2406.14712)
+ - **Release Date:** Pending
+ - **License:** Pending
+
+ ## Usage
+
+ ### Intended use
+
+ This model is designed for generating quantum computing code using Qiskit. Both quantum computing practitioners and new Qiskit users can use this model as an assistant for building Qiskit code or for responding to Qiskit coding-related instructions and questions.
+
+ ### Generation
+
+ This is a simple example of how to use the **granite-3.2-8b-qiskit** model.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ device = "cuda"  # or "cpu"
+ model_path = "qiskit/granite-3.2-8b-qiskit"
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ # drop device_map if running on CPU
+ model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
+ model.eval()
+ # change input text as desired
+ chat = [
+     { "role": "user", "content": "Build a random circuit with 5 qubits" },
+ ]
+ input_text = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+ # tokenize the text
+ input_tokens = tokenizer(input_text, return_tensors="pt")
+ # move tokenized inputs to device
+ for i in input_tokens:
+     input_tokens[i] = input_tokens[i].to(device)
+ # generate output tokens
+ output = model.generate(**input_tokens, max_new_tokens=128)
+ # decode output tokens into text
+ output = tokenizer.batch_decode(output)
+ # loop over the batch to print; in this example the batch size is 1
+ for i in output:
+     print(i)
+ ```
+
+ ## Training Data
+
+ - **Data Collection and Filtering:** Our code data is sourced from a combination of publicly available datasets (e.g., code available on <https://github.com>) and additional synthetic data generated at IBM Quantum. We exclude code that is older than 2023.
+ - **Exact and Fuzzy Deduplication:** We use both exact and fuzzy deduplication to remove documents with (near-)identical code content.
+ - **HAP, PII, Malware Filtering:** We rely on the base model ibm-granite/granite-8b-code-base for HAP and malware filtering of the initial datasets used in the context of the base model. We also redact Personally Identifiable Information (PII) in our datasets by replacing PII content (e.g., names, email addresses, keys, passwords) with corresponding tokens (e.g., ⟨NAME⟩, ⟨EMAIL⟩, ⟨KEY⟩, ⟨PASSWORD⟩).
+
+ ## Infrastructure
+
+ We train granite-3.2-8b-qiskit on IBM's supercomputing cluster (Vela), using NVIDIA A100 GPUs. The cluster provides scalable and efficient infrastructure for training.
+
+ ## Ethical Considerations and Limitations
+
+ The use of Large Language Models involves risks and ethical considerations that people must be aware of. Regarding code generation, we caution against complete reliance on any specific code model for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **granite-3.2-8b-qiskit** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment, and it may therefore produce problematic outputs. Additionally, it remains uncertain whether smaller models might be more susceptible to hallucination in generation scenarios, copying source code verbatim from the training dataset, due to their reduced size and memorization capacity. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigation in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **granite-3.2-8b-qiskit** model with ethical intentions and in a responsible way.
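The Generation example above prompts the model for a random 5-qubit circuit. For reference, here is a minimal illustrative sketch of the kind of Qiskit code such a prompt targets (not actual model output), using Qiskit's built-in `random_circuit` helper; the depth and measurement settings are assumptions:

```python
from qiskit.circuit.random import random_circuit

# Build a random circuit with 5 qubits; depth=4 and final measurements are illustrative choices
qc = random_circuit(num_qubits=5, depth=4, measure=True)
print(qc.draw())
```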
added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "<|end_of_role|>": 49153,
+   "<|start_of_role|>": 49152,
+   "<|tool_call|>": 49154
+ }
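These ids sit at the top of the 49155-entry vocabulary declared in `config.json` below. A quick sanity check with the tokenizer, assuming the `qiskit/granite-3.2-8b-qiskit` repo id from the README:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("qiskit/granite-3.2-8b-qiskit")
# The three chat/tool control tokens occupy the last three vocabulary slots
print(tokenizer.convert_tokens_to_ids("<|start_of_role|>"))  # 49152
print(tokenizer.convert_tokens_to_ids("<|end_of_role|>"))    # 49153
print(tokenizer.convert_tokens_to_ids("<|tool_call|>"))      # 49154
```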
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "architectures": [
+     "GraniteForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.1,
+   "attention_multiplier": 0.0078125,
+   "bos_token_id": 0,
+   "embedding_multiplier": 12.0,
+   "eos_token_id": 0,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 12800,
+   "logits_scaling": 16.0,
+   "max_position_embeddings": 131072,
+   "mlp_bias": false,
+   "model_type": "granite",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 40,
+   "num_key_value_heads": 8,
+   "pad_token_id": 0,
+   "residual_multiplier": 0.22,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 10000000.0,
+   "tie_word_embeddings": true,
+   "torch_dtype": "float16",
+   "transformers_version": "4.51.3",
+   "use_cache": true,
+   "vocab_size": 49155
+ }
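The configuration describes a 40-layer Granite decoder with grouped-query attention (32 query heads sharing 8 key/value heads) and a 131072-token context window. A minimal sketch for inspecting these fields programmatically, assuming the `qiskit/granite-3.2-8b-qiskit` repo id from the README:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("qiskit/granite-3.2-8b-qiskit")
print(config.model_type)               # granite
print(config.num_hidden_layers)        # 40
print(config.num_attention_heads)      # 32 query heads
print(config.num_key_value_heads)      # 8 (grouped-query attention)
print(config.max_position_embeddings)  # 131072
```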
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "eos_token_id": 0,
+   "pad_token_id": 0,
+   "transformers_version": "4.47.0.dev0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8087fb98d82f1dc37a38985ebc62fa8d5aa9988e9fa37b132ead7db335aeb8f6
+ size 4995640952
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47dd7debc408fa0058ce26552b682786179c05feb009586559854658023a6629
+ size 4970476512
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e3afe4cc142997256bceb280bc33890136c0b48f157fb2fcc97c7d3398144dd
+ size 4991431080
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad8923ffb5012f73361b2fef5b04e568ea11db9019bd860b3a8892109bba6f55
+ size 1384189728
model.safetensors.index.json ADDED
@@ -0,0 +1 @@
+ {"metadata": {"mergekit_version": "0.1.2"}, "weight_map": {"model.embed_tokens.weight": "model-00001-of-00004.safetensors", "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.10.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.11.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.12.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.12.self_attn.q_proj.weight": 
"model-00001-of-00004.safetensors", "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.13.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.14.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.15.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.16.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.17.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.17.self_attn.q_proj.weight": 
"model-00001-of-00004.safetensors", "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.18.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.mlp.up_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00004.safetensors", "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.input_layernorm.weight": "model-00001-of-00004.safetensors", "model.layers.19.mlp.down_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00004.safetensors", "model.layers.19.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.2.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.2.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.2.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.2.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.2.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.2.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.2.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.2.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.2.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.20.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.21.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.21.self_attn.q_proj.weight": 
"model-00002-of-00004.safetensors", "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.22.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.22.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.22.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.23.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.23.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.23.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.24.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.25.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.26.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.26.self_attn.q_proj.weight": 
"model-00002-of-00004.safetensors", "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.27.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.28.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.28.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.28.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.29.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.29.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.29.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.3.mlp.down_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.mlp.gate_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.mlp.up_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.3.self_attn.k_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.self_attn.o_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.self_attn.q_proj.weight": "model-00002-of-00004.safetensors", "model.layers.3.self_attn.v_proj.weight": "model-00002-of-00004.safetensors", "model.layers.30.input_layernorm.weight": "model-00002-of-00004.safetensors", "model.layers.30.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.30.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.30.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.30.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.30.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.30.self_attn.q_proj.weight": 
"model-00003-of-00004.safetensors", "model.layers.30.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.31.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.31.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.31.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.31.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.31.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.31.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.31.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.31.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.31.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.32.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.32.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.32.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.33.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.33.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.33.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.34.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.34.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.34.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.35.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.35.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.35.self_attn.q_proj.weight": 
"model-00003-of-00004.safetensors", "model.layers.35.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.36.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.36.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.36.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.37.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.37.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.37.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.38.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.38.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.38.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.39.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.39.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.39.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.4.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.4.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.4.self_attn.q_proj.weight": 
"model-00003-of-00004.safetensors", "model.layers.4.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.5.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.5.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.5.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.5.mlp.up_proj.weight": "model-00003-of-00004.safetensors", "model.layers.5.post_attention_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.5.self_attn.k_proj.weight": "model-00003-of-00004.safetensors", "model.layers.5.self_attn.o_proj.weight": "model-00003-of-00004.safetensors", "model.layers.5.self_attn.q_proj.weight": "model-00003-of-00004.safetensors", "model.layers.5.self_attn.v_proj.weight": "model-00003-of-00004.safetensors", "model.layers.6.input_layernorm.weight": "model-00003-of-00004.safetensors", "model.layers.6.mlp.down_proj.weight": "model-00003-of-00004.safetensors", "model.layers.6.mlp.gate_proj.weight": "model-00003-of-00004.safetensors", "model.layers.6.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.6.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.6.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.7.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.7.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.7.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.8.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.8.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", "model.layers.8.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.input_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.9.mlp.down_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.mlp.gate_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.mlp.up_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.post_attention_layernorm.weight": "model-00004-of-00004.safetensors", "model.layers.9.self_attn.k_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.self_attn.o_proj.weight": "model-00004-of-00004.safetensors", "model.layers.9.self_attn.q_proj.weight": "model-00004-of-00004.safetensors", 
"model.layers.9.self_attn.v_proj.weight": "model-00004-of-00004.safetensors", "model.norm.weight": "model-00004-of-00004.safetensors"}}
special_tokens_map.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "additional_special_tokens": [
+     "<|start_of_role|>",
+     "<|end_of_role|>",
+     "<|tool_call|>"
+   ],
+   "bos_token": {
+     "content": "<|end_of_text|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|end_of_text|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|end_of_text|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<|end_of_text|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,199 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<|end_of_text|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<fim_prefix>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "<fim_middle>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "3": {
+ "content": "<fim_suffix>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "4": {
+ "content": "<fim_pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "5": {
+ "content": "<filename>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "6": {
+ "content": "<gh_stars>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "7": {
+ "content": "<issue_start>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "8": {
+ "content": "<issue_comment>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "9": {
+ "content": "<issue_closed>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "10": {
+ "content": "<jupyter_start>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "11": {
+ "content": "<jupyter_text>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "12": {
+ "content": "<jupyter_code>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "13": {
+ "content": "<jupyter_output>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "14": {
+ "content": "<empty_output>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "15": {
+ "content": "<commit_before>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "16": {
+ "content": "<commit_msg>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "17": {
+ "content": "<commit_after>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "18": {
+ "content": "<reponame>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49152": {
+ "content": "<|start_of_role|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49153": {
+ "content": "<|end_of_role|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "49154": {
+ "content": "<|tool_call|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "<|start_of_role|>",
+ "<|end_of_role|>",
+ "<|tool_call|>"
+ ],
+ "bos_token": "<|end_of_text|>",
+ "chat_template": "{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content'] %}\n {%- set loop_messages = messages[1:] %}\n{%- else %}\n {%- set system_message = \"Knowledge Cutoff Date: April 2024.\nToday's Date: \" + strftime_now('%B %d, %Y') + \".\nYou are Granite, developed by IBM.\" %}\n {%- if tools and documents %}\n {%- set system_message = system_message + \" You are a helpful AI assistant with access to the following tools. When a tool is required to answer the user's query, respond with <|tool_call|> followed by a JSON list of tools used. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.\n\nWrite the response to the user's input by strictly aligning with the facts in the provided documents. If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data.\" %}\n {%- elif tools %}\n {%- set system_message = system_message + \" You are a helpful AI assistant with access to the following tools. When a tool is required to answer the user's query, respond with <|tool_call|> followed by a JSON list of tools used. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.\" %}\n {%- elif documents %}\n {%- set system_message = system_message + \" Write the response to the user's input by strictly aligning with the facts in the provided documents. If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data.\" %}\n {%- elif thinking %}\n {%- set system_message = system_message + \" You are a helpful AI assistant.\nRespond to every user query in a comprehensive and detailed way. You can write down your thoughts and reasoning process before responding. In the thought process, engage in a comprehensive cycle of analysis, summarization, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. In the response section, based on various attempts, explorations, and reflections from the thoughts section, systematically present the final solution that you deem correct. The response should summarize the thought process. Write your thoughts after 'Here is my thought process:' and write your response after 'Here is my response:' for each user query.\" %}\n {%- else %}\n {%- set system_message = system_message + \" You are a helpful AI assistant.\" %} \n {%- endif %}\n {%- if 'citations' in controls and documents %}\n {%- set system_message = system_message + '\n\nIn your response, use the symbols <co> and </co> to indicate when a fact comes from a document in the search result, e.g <co>0</co> for a fact from document 0. Afterwards, list all the citations with their corresponding documents in an ordered list.' %}\n {%- endif %}\n {%- if 'hallucinations' in controls and documents %}\n {%- set system_message = system_message + '\n\nFinally, after the response is written, include a numbered list of sentences from the response that are potentially hallucinated and not based in the documents.' 
%}\n {%- endif %}\n {%- set loop_messages = messages %}\n{%- endif %}\n{{- '<|start_of_role|>system<|end_of_role|>' + system_message + '<|end_of_text|>\n' }}\n{%- if tools %}\n {{- '<|start_of_role|>tools<|end_of_role|>' }}\n {{- tools | tojson(indent=4) }}\n {{- '<|end_of_text|>\n' }}\n{%- endif %}\n{%- if documents %}\n {{- '<|start_of_role|>documents<|end_of_role|>' }}\n {%- for document in documents %}\n {{- 'Document ' + loop.index0 | string + '\n' }}\n {{- document['text'] }}\n {%- if not loop.last %}\n {{- '\n\n'}}\n {%- endif%}\n {%- endfor %}\n {{- '<|end_of_text|>\n' }}\n{%- endif %}\n{%- for message in loop_messages %}\n {{- '<|start_of_role|>' + message['role'] + '<|end_of_role|>' + message['content'] + '<|end_of_text|>\n' }}\n {%- if loop.last and add_generation_prompt %}\n {{- '<|start_of_role|>assistant' }}\n {%- if controls %}\n {{- ' ' + controls | tojson()}}\n {%- endif %}\n {{- '<|end_of_role|>' }}\n {%- endif %}\n{%- endfor %}",
+ "clean_up_tokenization_spaces": true,
+ "eos_token": "<|end_of_text|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 9223372036854775807,
+ "pad_token": "<|end_of_text|>",
+ "padding_side": "left",
+ "tokenizer_class": "GPT2Tokenizer",
+ "unk_token": "<|end_of_text|>",
+ "vocab_size": 49152
+ }
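The `chat_template` above frames every turn as `<|start_of_role|>role<|end_of_role|>content<|end_of_text|>` and, when no system message is supplied, injects a default Granite system prompt. A minimal sketch of rendering a prompt through this template, assuming the `qiskit/granite-3.2-8b-qiskit` repo id from the README:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("qiskit/granite-3.2-8b-qiskit")
chat = [{"role": "user", "content": "Build a random circuit with 5 qubits"}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
# Expected framing per the template (default system message elided):
# <|start_of_role|>system<|end_of_role|>...<|end_of_text|>
# <|start_of_role|>user<|end_of_role|>Build a random circuit with 5 qubits<|end_of_text|>
# <|start_of_role|>assistant<|end_of_role|>
```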