---
license: apache-2.0
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
base_model_relation: quantized
pipeline_tag: text2text-generation
---

# Elastic model: DeepSeek-R1-Distill-Qwen-7B. Fastest and most flexible models for self-serving.

Elastic models are models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA lets you control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized variants:

* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near-lossless model, with less than 1% degradation on the corresponding benchmarks.
* __M__: Faster model, with accuracy degradation of less than 1.5%.
* __S__: The fastest model, with accuracy degradation of less than 2%.

__Goals of elastic models:__

* Provide flexibility in the cost vs. quality trade-off for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, pre-compiled and requiring no JIT
* Provide the best models and service for self-hosting

> Note that the actual quality degradation can vary from model to model. For instance, an S model can show as little as 0.5% degradation.

![Performance Graph](images/performance_graph.png)
-----

## Inference

To run inference with our models, you only need to replace the `transformers` import with `elastic_models.transformers`:

```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM

# Currently we require your HF token, since we use the original
# weights for part of the layers and the model configuration as well.
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
hf_token = ''
device = torch.device("cuda")

# Create the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(
    model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    token=hf_token,
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
    mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id

# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
    {
        "role": "system",
        "content": "You are a search bot, answer on user text queries."
    },
    {
        "role": "user",
        "content": prompt
    }
]

chat_prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

inputs = tokenizer(chat_prompt, return_tensors="pt").to(device)

with torch.inference_mode():
    generate_ids = model.generate(**inputs, max_length=500)

# Strip the prompt tokens and decode only the generated answer
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
    generate_ids,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)[0]

# Validate the answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
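
The `mode` argument is what selects the speed/quality tier. As a minimal sketch of comparing tiers (an illustration reusing the names from the snippet above, assuming `mode` accepts the tier labels listed earlier), you could regenerate the same answer with each variant:

```python
# Hypothetical comparison loop over the elastic tiers from the list above.
for mode in ('S', 'M', 'L', 'XL'):
    tier_model = AutoModelForCausalLM.from_pretrained(
        model_name,
        token=hf_token,
        torch_dtype=torch.bfloat16,
        attn_implementation="sdpa",
        mode=mode
    ).to(device)
    tier_model.generation_config.pad_token_id = tokenizer.eos_token_id
    with torch.inference_mode():
        ids = tier_model.generate(**inputs, max_length=500)
    answer = tokenizer.batch_decode(ids[:, input_len:], skip_special_tokens=True)[0]
    print(f"## mode={mode}:\n{answer}\n")
    del tier_model  # free GPU memory before loading the next tier
    torch.cuda.empty_cache()
```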

__System requirements:__
* GPUs: H100, L40S
* CPU: AMD, Intel
* Python: 3.10-3.12

To work with our models, just run these lines in your terminal:

```shell
pip install thestage
pip install "elastic_models[nvidia]" \
  --index-url https://thestage.jfrog.io/artifactory/api/pypi/pypi-thestage-ai-production/simple \
  --extra-index-url https://pypi.nvidia.com \
  --extra-index-url https://pypi.org/simple

pip install flash_attn==2.7.3 --no-build-isolation
pip uninstall apex
```
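
To sanity-check the installation (a minimal check of our own, not part of the official instructions), verify that the package imports cleanly:

```shell
python -c "import elastic_models.transformers; print('elastic_models OK')"
```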

Then go to [app.thestage.ai](https://app.thestage.ai), log in, and generate an API token from your profile page. Set up the API token as follows:

```shell
thestage config set --api-token <YOUR_API_TOKEN>
```

Congrats, now you can use accelerated models!

----

## Benchmarks

Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers and used the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
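
For context, W8A8 means both weights and activations are quantized to 8 bits. Below is a minimal sketch of symmetric per-tensor int8 quantization for one linear layer — an illustration of the general technique, not TheStage AI's actual calibration pipeline:

```python
import torch

def quantize_int8(x: torch.Tensor):
    """Symmetric per-tensor int8 quantization: map [-max|x|, max|x|] onto [-127, 127]."""
    scale = x.abs().max() / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

# In a W8A8 linear layer both the weights and the input activations are
# quantized to int8, the matmul is accumulated in int32, and the result
# is rescaled by the product of the two scales.
w = torch.randn(4096, 4096)   # weight matrix of a linear layer
a = torch.randn(1, 4096)      # input activations
qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)
y = (qa.to(torch.int32) @ qw.to(torch.int32).T).to(torch.float32) * (sa * sw)

ref = a @ w.T
err = (y - ref).abs().max() / ref.abs().max()
print(f"max relative error: {err.item():.4f}")  # small but nonzero quantization error
```

Sensitive layers amplify this quantization error; keeping them in higher precision (or calibrating them more carefully) is how an S model can match W8A8 speed while losing less quality.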

### Quality benchmarks

| Metric/Model  | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| arc_challenge | 41.00 | 40.90 | 42.50 | 42.20 | 42.20 | - |
| mmlu          | 52.00 | 53.80 | 55.10 | 55.20 | 55.20 | - |
| piqa          | 68.40 | 70.60 | 70.70 | 70.50 | 70.50 | - |
| winogrande    | 60.10 | 60.90 | 60.20 | 60.10 | 60.10 | - |

* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows the model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows the model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows the model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows the model's capability to understand context and resolve ambiguity.
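
These benchmark names match standard lm-evaluation-harness task names. As an assumption about the setup (the table values come from our own evaluation, and the exact harness and settings are not documented here), comparable numbers for the original model can typically be produced with:

```shell
pip install lm_eval
lm_eval --model hf \
  --model_args pretrained=deepseek-ai/DeepSeek-R1-Distill-Qwen-7B,dtype=bfloat16 \
  --tasks arc_challenge,mmlu,piqa,winogrande \
  --batch_size 8
```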

### Latency benchmarks

__100 input / 300 output tokens; tok/s:__

| GPU/Model | S | M | L | XL | Original | W8A8, int8 |
|-----------|---|---|---|----|----------|------------|
| H100 | - | - | - | - | - | - |
| L40S | 78 | 69 | 60 | 47 | 43 | 78 |
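
To get comparable numbers on your own hardware, a minimal throughput measurement might look like the sketch below. It reuses `model`, `tokenizer`, and `device` from the inference example and mirrors the 100-input/300-output setup of the table; the random prompt tokens are a simplification for timing purposes only:

```python
import time
import torch

# Build a 100-token prompt and force exactly 300 new tokens.
prompt_ids = torch.randint(0, tokenizer.vocab_size, (1, 100)).to(device)
attention_mask = torch.ones_like(prompt_ids)

# Warm-up run so CUDA kernels and caches are initialized
with torch.inference_mode():
    model.generate(prompt_ids, attention_mask=attention_mask,
                   max_new_tokens=300, min_new_tokens=300, do_sample=False)

torch.cuda.synchronize()
start = time.perf_counter()
with torch.inference_mode():
    out = model.generate(prompt_ids, attention_mask=attention_mask,
                         max_new_tokens=300, min_new_tokens=300, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

new_tokens = out.shape[1] - prompt_ids.shape[1]
print(f"{new_tokens / elapsed:.1f} tok/s")
```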

## Links

* __Platform__: [app.thestage.ai](https://app.thestage.ai)
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Contact email__: contact@thestage.ai