aliRafik committed on · Commit 33da844 · verified · 1 Parent(s): 37cc65d

Files changed (1): README.md (+263 −19)

README.md CHANGED
This makes it **production-ready**: no need to separately load base + adapters.

---
 
## Intended Use

- Extracting structured JSON fields from invoice images:
  - Invoice number, date
  - Seller/client details
  - Tax IDs, IBAN
  - Item descriptions, prices, VAT
  - Totals (net, VAT, gross)
- Not intended for general document OCR outside invoices.

## Training Details

- **Base model**: Qwen/Qwen2.5-VL-3B-Instruct
- **Framework**: Hugging Face TRL (SFTTrainer) with PEFT/LoRA
- **LoRA config**:
  - **Rank (r)**: 8
  - **Alpha**: 32
  - **Target modules**: q_proj, v_proj
  - **Dropout**: 0.1
- **Epochs**: 10
- **Batch size**: 2
- **Learning rate**: 1e-5
- **Precision**: bfloat16
- **Gradient accumulation**: 4
- **Scheduler**: constant LR
- **Max sequence length**: 1024
- **Gradient checkpointing**: enabled
- **Trainable parameters**: ~1.8M (0.05% of 3.75B total)
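
The trainable-parameter figure can be sanity-checked with quick arithmetic. The architecture numbers below (hidden size 2048, 36 decoder layers, a 256-wide key/value projection from grouped-query attention) are my assumptions about Qwen2.5-VL-3B's language model, not values stated in this card; verify them against the model's `config.json`:

```python
# Back-of-the-envelope check of the "~1.8M trainable parameters" figure.
# LoRA adds two low-rank matrices A (r x d_in) and B (d_out x r) per target module.
r = 8           # LoRA rank from the config above
hidden = 2048   # assumed hidden size of Qwen2.5-VL-3B's language model
kv_dim = 256    # assumed key/value projection width (grouped-query attention)
layers = 36     # assumed number of decoder layers

q_proj = r * (hidden + hidden)   # q_proj: hidden -> hidden
v_proj = r * (hidden + kv_dim)   # v_proj: hidden -> kv_dim
trainable = layers * (q_proj + v_proj)

print(trainable)                    # 1843200, i.e. ~1.8M
print(f"{trainable / 3.75e9:.3%}")  # 0.049% of 3.75B
```

Under these assumptions the count lands almost exactly on the reported ~1.8M, about 0.05% of the 3.75B total.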

## Usage

### Installation

The example code below also imports `qwen_vl_utils` and relies on `device_map="auto"`, so those packages are included here:

```bash
pip install transformers torch datasets pillow qwen-vl-utils accelerate
```

### Load Model and Processor

```python
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

model_name = "aliRafik/invoices-donut-finetuned-Lora-merged"

model = AutoModelForVision2Seq.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,               # optional: use float32 if bfloat16 causes issues
    attn_implementation="flash_attention_2",  # requires an Ampere+ GPU and torch >= 2.0
    device_map="auto",
)

processor = AutoProcessor.from_pretrained(
    model_name,
    trust_remote_code=True,
    padding_side="left",
    use_fast=True,
)
```

### Define Extraction Template

```python
template = """
{
    "header": {
        "invoice_no": "string",
        "invoice_date": "date-time",
        "seller": "string",
        "client": "string",
        "seller_tax_id": "string",
        "client_tax_id": "string",
        "iban": "string"
    },
    "items": [
        {
            "item_desc": "string",
            "item_qty": "number",
            "item_net_price": "number",
            "item_net_worth": "number",
            "item_vat": "number",
            "item_gross_worth": "number"
        }
    ],
    "summary": {
        "total_net_worth": "number",
        "total_vat": "number",
        "total_gross_worth": "number"
    }
}
"""
```

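Because the template is plain JSON, it can be validated before inference. A minimal stdlib-only sanity check (my sketch, not part of the original card), shown on an abbreviated copy of the template so the snippet is self-contained:

```python
import json

# Abbreviated copy of the template above; in practice, parse the full
# `template` string instead.
template = """
{
    "header": {"invoice_no": "string", "invoice_date": "date-time"},
    "items": [{"item_desc": "string", "item_qty": "number"}],
    "summary": {"total_net_worth": "number", "total_gross_worth": "number"}
}
"""

schema = json.loads(template)  # raises json.JSONDecodeError if malformed
print(sorted(schema))          # ['header', 'items', 'summary']
```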
### Test on Sample from Dataset

```python
import json

from datasets import load_dataset
from qwen_vl_utils import process_vision_info

# Load the dataset
dataset = load_dataset("katanaml-org/invoices-donut-data-v1")

# Select a sample (e.g., index 0)
sample = dataset["train"][0]
image = sample["image"]
ground_truth = sample["ground_truth"]

print(json.loads(ground_truth))

# Prepare the message
messages = [
    {"role": "user", "content": [{"type": "image", "image": image}]}
]

# Process vision info
image_inputs, _ = process_vision_info(messages)

# Apply the chat template (the extraction template is injected as a kwarg)
text = processor.tokenizer.apply_chat_template(
    messages,
    template=template,
    tokenize=False,
    add_generation_prompt=True,
)

# Prepare inputs
inputs = processor(
    text=[text],
    images=image_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generation config
generation_config = {
    "do_sample": False,
    "num_beams": 1,
    "max_new_tokens": 2048,
}

# Generate and trim the prompt tokens from the output
generated_ids = model.generate(**inputs, **generation_config)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)

# Parse and print
try:
    extracted_data = json.loads(output_text[0])
    print("Extracted Data:", extracted_data)
except json.JSONDecodeError:
    print("Raw Output:", output_text[0])

# Compare with the ground truth
gt_parsed = json.loads(ground_truth)["gt_parse"]
print("Ground Truth:", gt_parsed)
```
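
Generation output is not guaranteed to be bare JSON. As a more defensive variant of the `try`/`except` above, a small helper (my sketch, not from the card) can strip code fences and isolate the first JSON object before parsing:

```python
import json

def parse_model_json(raw: str):
    """Best-effort JSON extraction: strip code fences, then parse the
    outermost {...} span. Raises ValueError if no object is found."""
    raw = raw.strip()
    if raw.startswith("```"):
        raw = raw.strip("`")
        # drop an optional language tag such as "json" on the first line
        raw = raw.split("\n", 1)[1] if "\n" in raw else raw
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start:end + 1])

print(parse_model_json('```json\n{"header": {"invoice_no": "49565075"}}\n```'))
# {'header': {'invoice_no': '49565075'}}
```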

### Test on Unseen Data (Custom Image)

```python
from io import BytesIO

import requests
from PIL import Image

# Load from a local path
image_path = "/content/image.jpg"  # replace with your path
image = Image.open(image_path)

# Or load from a URL
# image_url = "https://example.com/your_invoice.jpg"
# response = requests.get(image_url)
# image = Image.open(BytesIO(response.content))

# Then run the same inference code as above
```

## Example Results

#### Input Image:

![Invoice Extraction Example](https://th.bing.com/th/id/OIP.u5Uh7wUsLTy4zqUMOWuT-QHaJl?w=186&h=242&c=7&r=0&o=5&pid=1.7)

#### Extracted Data:

```json
{
    "header": {
        "invoice_no": "49565075",
        "invoice_date": "2019-10-28",
        "seller": "Kane-Morgan 968 Carr Mission Apt. 320 Bernardville, VA 28211",
        "client": "Garcia Inc 445 Haas Viaduct Suite 454 Michaelhaven, LA 32852",
        "seller_tax_id": "964-95-3813",
        "client_tax_id": "909-75-5482",
        "iban": "GB73WCJ55232646970614"
    },
    "items": [
        {
            "item_desc": "Anthropologie Gold Elegant Swan Decorative Metal Bottle Stopper Wine Saver",
            "item_qty": 3.0,
            "item_net_price": 19.98,
            "item_net_worth": 59.94,
            "item_vat": 10.0,
            "item_gross_worth": 65.93
        },
        {
            "item_desc": "Lolita Happy Retirement Wine Glass 15 Ounce GLS11-5534H",
            "item_qty": 1.0,
            "item_net_price": 8.0,
            "item_net_worth": 8.0,
            "item_vat": 10.0,
            "item_gross_worth": 8.8
        },
        {
            "item_desc": "Lolita \"Congratulations\" Hand Painted and Decorated Wine Glass NIB",
            "item_qty": 1.0,
            "item_net_price": 20.0,
            "item_net_worth": 20.0,
            "item_vat": 10.0,
            "item_gross_worth": 22.0
        }
    ],
    "summary": {
        "total_net_worth": 87.94,
        "total_vat": 8.79,
        "total_gross_worth": 96.73
    }
}
```
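
As a quick plausibility check on this example (my arithmetic, not model output): each item's gross worth is net × (1 + VAT/100), and the summary totals are the column sums of the line items:

```python
# Reconcile the example's line items with its summary block (stdlib only).
items = [  # (net_worth, vat_percent, gross_worth) from the extraction above
    (59.94, 10.0, 65.93),
    (8.0, 10.0, 8.8),
    (20.0, 10.0, 22.0),
]

total_net = round(sum(net for net, _, _ in items), 2)
total_gross = round(sum(gross for _, _, gross in items), 2)
total_vat = round(total_gross - total_net, 2)

print(total_net, total_vat, total_gross)  # 87.94 8.79 96.73

# Per-item check: gross ≈ net * (1 + VAT/100), within rounding tolerance
for net, vat, gross in items:
    assert abs(net * (1 + vat / 100) - gross) < 0.01
```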

## License

Apache-2.0

## Tags

- vision
- document-understanding
- invoice-processing
- donut
- qwen

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```