# Phi-3 Mini Reverse Fine-tuned for Payments Domain
This is a reverse fine-tuned version of Microsoft's Phi-3-Mini-4k-Instruct model, adapted for extracting structured payment metadata from natural language descriptions using LoRA (Low-Rank Adaptation).
## Model Description

This model converts natural language payment descriptions into structured, machine-readable metadata. It performs the opposite task of the companion forward model: instead of generating human-friendly text, it extracts structured data that payment APIs and applications can process.
## Related Models

Forward Model (Companion): aamanlamba/phi3-payments-finetune
- Converts structured metadata → natural language
- Use together for round-trip validation (a sketch appears after the parsing example below)
## Training Data

The model was trained on a dataset of 500+ synthetic payment transactions where:
- Input: natural language payment descriptions
- Output: structured metadata in `action(field[value], ...)` format (an example record appears after the list below)
Transaction types covered:
- Standard payments (ACH, wire transfer, credit/debit card)
- Refunds (full and partial)
- Chargebacks and disputes
- Failed/declined transactions
- International transfers with currency conversion
- Transaction fees
- Recurring payments/subscriptions
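For illustration, a single training pair might look like the record below. This record is hypothetical; the actual schema is defined by the dataset generator in the GitHub repository linked under Training Code.

```python
# Hypothetical training record; the real schema is defined by the dataset
# generator in the GitHub repository linked below.
example = {
    "input": "A refund of EUR 75.00 was issued to your Visa credit card "
             "on 2024-09-14.",
    "output": "inform(transaction_type[refund], amount[75.00], currency[EUR], "
              "method[credit_card], status[completed], date[2024-09-14])",
}
```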
## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
base_model = "microsoft/Phi-3-mini-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the LoRA adapters (reverse model)
model = PeftModel.from_pretrained(model, "aamanlamba/phi3-payments-reverse-finetune")
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

# Extract structured data
prompt = """<|system|>
You are a financial data extraction assistant that converts natural language payment descriptions into structured metadata that can be processed by payment applications.<|end|>
<|user|>
Extract structured payment information from the following description:
Your payment of USD 1,500.00 to Global Supplies Inc via wire transfer was successfully completed on 2024-10-27.<|end|>
<|assistant|>
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        temperature=0.3,  # lower temperature for more deterministic extraction
        top_p=0.9,
        do_sample=True,
    )

# Decode only the newly generated tokens. Note that skip_special_tokens
# strips the <|assistant|> marker, so splitting the full decoded text on
# it would not work.
structured_data = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
).strip()
print(structured_data)
```
Expected output:

```
inform(transaction_type[payment], amount[1500.00], currency[USD], receiver[Global Supplies Inc], status[completed], method[wire_transfer], date[2024-10-27])
```
## Parsing the Output

```python
import re

def parse_structured_data(structured_str: str) -> dict | None:
    """Parse structured payment data into a dictionary."""
    action_match = re.match(r'(\w+)\((.*)\)', structured_str)
    if not action_match:
        return None

    action_type = action_match.group(1)
    fields_str = action_match.group(2)

    fields = {}
    field_pattern = r'(\w+)\[(.*?)\]'
    for match in re.finditer(field_pattern, fields_str):
        field_name = match.group(1)
        field_value = match.group(2)
        # Convert numeric values
        if field_name in ['amount', 'refund_amount', 'fee_amount', 'exchange_rate']:
            try:
                field_value = float(field_value)
            except ValueError:
                pass
        fields[field_name] = field_value

    return {
        'action_type': action_type,
        'fields': fields,
    }

# Use it
parsed = parse_structured_data(structured_data)
print(parsed)
# Output: {'action_type': 'inform', 'fields': {'transaction_type': 'payment', 'amount': 1500.0, ...}}
```
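For round-trip validation with the companion forward model, one approach is to load both LoRA adapters onto the same base model and switch between them with PEFT's `load_adapter`/`set_adapter`. A minimal sketch, continuing from the examples above (it reuses `model`, `tokenizer`, and `parse_structured_data`); the forward model's prompt wording here is an assumption, so check its model card:

```python
# Load the forward adapter alongside the reverse one. The reverse adapter
# loaded earlier via PeftModel.from_pretrained has the name "default".
model.load_adapter("aamanlamba/phi3-payments-finetune", adapter_name="forward")

def run(prompt: str, adapter: str) -> str:
    """Generate with the given adapter and return only the new text."""
    model.set_adapter(adapter)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=200,
                             temperature=0.3, top_p=0.9, do_sample=True)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:],
                            skip_special_tokens=True).strip()

original = ("inform(transaction_type[payment], amount[1500.00], currency[USD], "
            "receiver[Global Supplies Inc], status[completed], "
            "method[wire_transfer], date[2024-10-27])")

# Forward pass: structured -> natural language. This prompt is an assumed
# wording; see the forward model's card for its exact template.
text = run(f"<|user|>\nConvert the following payment metadata into a natural "
           f"language description:\n{original}<|end|>\n<|assistant|>\n", "forward")

# Reverse pass: natural language -> structured, using this model.
recovered = run(f"<|user|>\nExtract structured payment information from the "
                f"following description:\n{text}<|end|>\n<|assistant|>\n", "default")

# The round trip passes if every field survives generation and extraction.
print(parse_structured_data(recovered) == parse_structured_data(original))
```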
## Training Details

### Training Configuration
- Base Model: microsoft/Phi-3-mini-4k-instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Task Direction: Natural Language → Structured Data (reverse)
- LoRA Rank: 16
- LoRA Alpha: 32
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Quantization: 8-bit (training), float16 (inference)
- Training Epochs: 3
- Learning Rate: 2e-4
- Batch Size: 1 (with 8 gradient accumulation steps)
- Hardware: NVIDIA RTX 3060 (12GB VRAM)
- Training Time: ~35-45 minutes
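The hyperparameters above translate into a PEFT/Transformers configuration roughly as follows. This is a sketch reconstructed from the list, not the actual training script (which is in the GitHub repository linked below), and `output_dir` is illustrative:

```python
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

# 8-bit base weights during training, as listed above
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="phi3-payments-reverse",  # illustrative path
    num_train_epochs=3,
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
)
```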
### Training Loss
- Initial Loss: ~3.5-4.0
- Final Loss: ~0.8-1.2
- Validation Loss: ~1.0-1.3
- Extraction Accuracy: ~90-95% on validation set
### Model Size
- LoRA Adapter Size: ~15MB (only the adapter weights, not the full model)
- Full Model Size: ~7GB (when combined with base model)
## Supported Transaction Types
- Payments: Standard payment transactions with various methods
- Refunds: Full and partial refunds
- Chargebacks: Dispute and chargeback processing
- Failed Payments: Declined or failed transactions with reasons
- International Transfers: Cross-border payments with currency conversion
- Fees: Transaction and processing fees
- Recurring Payments: Subscriptions and scheduled payments
- Reversals: Payment reversals and adjustments
## Output Format

The model extracts data in this structured format:

```
action_type(field1[value1], field2[value2], ...)
```

Action Types:
- `inform`: informational transactions (payments, refunds, transfers)
- `alert`: alerts and notifications (failures, chargebacks)

Common Fields:
- `transaction_type`: type of transaction
- `amount`: transaction amount (numeric)
- `currency`: currency code (USD, EUR, GBP, etc.)
- `sender`/`receiver`/`merchant`: party names
- `status`: transaction status (completed, pending, failed, etc.)
- `method`: payment method (credit_card, ACH, wire_transfer, etc.)
- `date`: transaction date (YYYY-MM-DD)
- `reason`: failure/chargeback reason (for alerts)
## Use Cases

### 1. Conversational Payment Interfaces

Extract payment details from user messages:

```
User: "I want to send $500 to John via PayPal"
Extracted: inform(transaction_type[payment], amount[500], currency[USD], receiver[John], method[PayPal])
```
### 2. Email Parsing

Extract transaction data from payment notification emails automatically.

### 3. Voice Payment Systems

Convert spoken payment descriptions into structured API calls.

### 4. Payment API Integration

Transform natural language payment requests into API-ready parameters, as sketched below.
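As a sketch of the last use case: once the model output is parsed with `parse_structured_data` (defined above), mapping fields onto request parameters is a small step. The payload keys and the `USD` default below are invented for illustration and should be replaced with your payment API's actual schema.

```python
# Hypothetical example: turn parsed model output into API-ready parameters.
# The payload keys and currency default are invented for illustration.
def to_api_payload(parsed: dict) -> dict:
    fields = parsed["fields"]
    return {
        "type": fields.get("transaction_type"),
        "amount": fields.get("amount"),
        "currency": fields.get("currency", "USD"),
        "recipient": fields.get("receiver"),
        "payment_method": fields.get("method"),
    }

parsed = parse_structured_data(
    "inform(transaction_type[payment], amount[500], currency[USD], "
    "receiver[John], method[PayPal])"
)
print(to_api_payload(parsed))
# {'type': 'payment', 'amount': 500.0, 'currency': 'USD',
#  'recipient': 'John', 'payment_method': 'PayPal'}
```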
## Limitations

- Trained on synthetic data; may require additional fine-tuning for production use
- Optimized for English only
- Best performance on transaction patterns similar to the training data
- Output format is custom and requires parsing (see the example above)
- Not suitable for handling real financial transactions without validation
- A lower temperature (0.3) is recommended for consistent extraction
## Ethical Considerations
- This model was trained on synthetic, anonymized data only
- Does not contain any real customer PII or transaction data
- Should be validated for accuracy before production deployment
- Implement validation and error handling for extracted data
- Consider regulatory compliance (PCI-DSS, GDPR, etc.) in your jurisdiction
- Always verify extracted financial data before processing
## Intended Use
Primary Use Cases:
- Extracting transaction data from natural language descriptions
- Building conversational payment bots
- Parsing payment notifications and emails
- Converting user requests to API parameters
- Training and demonstration purposes
- Research in financial NLP and information extraction
Out of Scope:
- Direct transaction processing without validation
- Real-time financial systems without error handling
- Compliance-critical data extraction
- Medical or legal payment processing
## Performance Notes
- Inference Speed: ~2-3 seconds per extraction on RTX 3060
- Temperature: Use 0.1-0.3 for deterministic extraction
- Validation: Always validate output format and field values
- Error Handling: Implement fallbacks for malformed outputs
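Building on these notes, a minimal validation wrapper over `parse_structured_data` (defined above) might look like the following; the required-field rules are illustrative assumptions, not part of the model:

```python
# Validation/fallback sketch; the required-field rules are illustrative.
REQUIRED_FIELDS = {
    "inform": {"transaction_type", "amount", "currency"},
    "alert": {"transaction_type", "reason"},
}

def extract_with_validation(raw: str) -> dict | None:
    """Return parsed output, or None so callers can fall back (retry, flag for review)."""
    parsed = parse_structured_data(raw.strip())
    if parsed is None:
        return None  # malformed: no action(...) wrapper
    required = REQUIRED_FIELDS.get(parsed["action_type"])
    if required is None or not required <= parsed["fields"].keys():
        return None  # unknown action type or missing required fields
    amount = parsed["fields"].get("amount")
    if amount is not None and not isinstance(amount, float):
        return None  # amount did not convert to a number
    return parsed
```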
## How to Cite

If you use this model in your research or application, please cite:

```bibtex
@misc{phi3-payments-reverse-finetuned,
  author       = {aamanlamba},
  title        = {Phi-3 Mini Reverse Fine-tuned for Payments Domain},
  year         = {2024},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/aamanlamba/phi3-payments-reverse-finetune}}
}
```
## Training Code

The complete training code and dataset generation scripts are available on GitHub:
- Repository: github.com/aamanlamba/phi3-tune-payments
- Branch: `reverse-structured-extraction` (this model)
- Includes: reverse dataset generator, training scripts, testing utilities, parsing examples
## Acknowledgements

- Base model: Microsoft Phi-3-Mini-4k-Instruct
- Fine-tuning method: LoRA (Low-Rank Adaptation of Large Language Models)
- Training framework: HuggingFace Transformers + PEFT
- Inspired by: NVIDIA AI Workbench Phi-3 Fine-tuning Example
## License
This model is released under the MIT license, compatible with the base Phi-3 model license.
## Contact
For questions or issues, please open an issue on the GitHub repository or contact the author.
Note: This is a reverse model for structured data extraction. For generating natural language from structured data, see the companion forward model.