# A VLM Framework to Optimize the Analysis of Analog Circuit Layouts

**ICML 2026 Submission - Under Review**
This repository contains the dataset presented in the paper "A VLM Framework to Optimize the Analysis of Analog Circuit Layouts", along with the code for training and evaluating Visual Language Models (VLMs) on analog circuit layout analysis tasks.
The project addresses the challenge of interpreting technical diagrams by benchmarking VLMs on tasks ranging from single device identification to component counting in complex mixed circuits.
## Dataset Overview
The dataset comprises over 30,000 circuits and 77,000+ Question-Answer pairs, organized into a comprehensive benchmark suite.
### Circuit Categories
- Single Devices (19,997 images): PMOS, NMOS, Capacitors, Resistors.
- Base Circuits (5,894 images): Ahuja OTA, Gate Driver, HPF, LDO, LPF, Miller OTA.
- Mixed Circuits (4,140 images): Complex combinations of base circuits.
### Benchmark Tasks
The dataset defines 5 core tasks for evaluation:
| Task | Description | Size |
|---|---|---|
| Task A | Single device identification | 19,997 samples |
| Task B | Base circuit identification | 5,894 samples |
| Task C | Component counting (base circuits) | 27,475 samples |
| Task D | Component counting (mixed circuits) | 19,848 samples |
| Task E | Base circuit identification in mixed circuits | 4,140 samples |
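The README does not show the QA record schema, but each benchmark sample pairs an image with a question and a ground-truth answer. As a hedged sketch (the field names and file paths below are assumptions, not the dataset's actual schema), per-task subsets can be grouped like this:

```python
from collections import Counter

# Hypothetical QA records; the real dataset's schema may differ.
samples = [
    {"task": "A", "image": "single_devices/pmos_0001.png",
     "question": "Which device is shown in the layout?", "answer": "PMOS"},
    {"task": "C", "image": "base_circuits/miller_ota_0042.png",
     "question": "How many capacitors are in this circuit?", "answer": "2"},
    {"task": "E", "image": "mixed_circuits/mix_0007.png",
     "question": "Which base circuits appear in this layout?", "answer": "LDO, LPF"},
]

# Group samples by benchmark task (A-E) for per-task evaluation.
by_task = {}
for s in samples:
    by_task.setdefault(s["task"], []).append(s)

print(Counter(s["task"] for s in samples))  # Counter({'A': 1, 'C': 1, 'E': 1})
```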
## Repository Structure

Once `code.zip` and `dataset.zip` have been unzipped, the structure is as follows:

```
.
├── code/             # Source code for fine-tuning and inference
├── base_circuits/    # Base circuit datasets and templates
├── mixed_circuits/   # Mixed circuit datasets
├── single_devices/   # Single device datasets
└── tasks/            # Task definitions and data splits
```
## Getting Started

### Prerequisites

All execution scripts are located in the `code/` directory.

```bash
cd code
pip install -r requirements.txt
```
### Fine-Tuning

The repository provides a sequential fine-tuning launcher to handle dataset ablations and multiple tasks.

**Basic Usage:**

```bash
# Dry-run to view planned training jobs
python VLM_finetune/run_ablation_sequential_ft.py --dry_run

# Train Task A (single device identification) with 100% of the dataset
python VLM_finetune/run_ablation_sequential_ft.py --task a1 --perc 100
```
**Advanced Usage:** Train multiple tasks with specific data percentages:

```bash
python VLM_finetune/run_ablation_sequential_ft.py --tasks a1,b1,c1 --percs 25,50,75,100
```
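The launcher's internals are not shown here, but the job grid implied by `--tasks`/`--percs` is the Cartesian product of the two lists. A minimal sketch of that planning step (hypothetical code, not the repository's actual implementation):

```python
from itertools import product

def plan_jobs(tasks_arg: str, percs_arg: str):
    """Expand comma-separated --tasks/--percs values into sequential job specs."""
    tasks = [t.strip() for t in tasks_arg.split(",") if t.strip()]
    percs = [int(p) for p in percs_arg.split(",") if p.strip()]
    # One fine-tuning job per (task, data-percentage) combination.
    return [{"task": t, "perc": p} for t, p in product(tasks, percs)]

jobs = plan_jobs("a1,b1,c1", "25,50,75,100")
print(len(jobs))  # 12 jobs: 3 tasks x 4 percentages
```

This is also why the `--dry_run` flag is useful: the full grid can grow quickly, so inspecting the planned jobs before launching avoids wasted GPU time.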
### Evaluation

The inference pipeline supports evaluating both base models and fine-tuned LoRA adapters.

**Batch Evaluation (Ablation Study):** Evaluate many adapters across different tasks and splits:

```bash
python VLM_inference/run_ft_eval_ablation.py \
    --splits-root /path/to/dataset/ablation_splits \
    --adapter-root /path/to/outputs/finetune_lora \
    --cache-dir /path/to/cache
```
**Result Reorganization:** Map raw evaluation results from the training tasks (A1/B1/C1) to the final benchmark tasks (A-E) and compute aggregated metrics:

```bash
python reorganize_results.py \
    --input-root /path/to/raw_results \
    --output-root /path/to/final_results
```
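The exact correspondence between training tasks and benchmark tasks is defined inside `reorganize_results.py` and is not spelled out in this README. As an illustrative sketch only (the mapping and data layout below are assumptions), the aggregation step could look like:

```python
from statistics import mean

# Assumed mapping from raw training-task results to final benchmark tasks;
# the actual correspondence is defined by reorganize_results.py.
TASK_MAP = {"a1": ["A"], "b1": ["B", "E"], "c1": ["C", "D"]}

def aggregate(raw_results):
    """Map {training_task: [per-sample scores]} to {benchmark_task: mean score}."""
    pooled = {}
    for train_task, scores in raw_results.items():
        for bench_task in TASK_MAP.get(train_task, []):
            pooled.setdefault(bench_task, []).extend(scores)
    return {task: mean(scores) for task, scores in sorted(pooled.items())}

print(aggregate({"a1": [1.0, 0.0, 1.0], "c1": [1.0, 1.0]}))
```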
**Single Task Evaluation:** Run inference on a single task/circuit:

```bash
# Evaluate Task A (training task A1)
python VLM_inference/test_base_models/run_ft_eval_update.py --task a1 --num-samples 200

# Evaluate with a specific adapter
python VLM_inference/test_base_models/run_ft_eval_update.py \
    --task a1 \
    --num-samples 200 \
    --adapter /path/to/adapter/checkpoint
```
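The README does not state which metric the evaluation scripts report. For identification and counting tasks, a natural choice is exact-match accuracy between the model's answer and the ground truth; a minimal hedged sketch (the normalization and metric are assumptions, not necessarily what the scripts compute):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of answers matching ground truth after light normalization."""
    assert len(predictions) == len(references)
    hits = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

preds = ["2", "PMOS ", "4"]
refs = ["2", "pmos", "3"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 answers match
```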