Datasets:

| Column | Type |
|---|---|
| image | image (width 1.02k–2.05k px) |
| label | class label (3 classes; e.g. 0 = annotation_masks) |
Paper: https://arxiv.org/abs/2508.10171
Project Page: https://synspill.vercel.app
SynSpill Reproduction Guide
1. Environment Setup
Create a conda environment and install the dependencies (Python 3.12).
```bash
# Create and activate a conda environment (the environment name is arbitrary)
conda create -n synspill python=3.12 -y
conda activate synspill

# Clone and set up ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Install dependencies
pip install -r requirements.txt

# Install PyTorch (NVIDIA GPU)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu128
```
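To verify the PyTorch install can see your GPU before going further, a quick check (run inside the environment created above):

```python
# Quick sanity check: confirm PyTorch was installed with CUDA support.
import torch

print(torch.__version__)           # should report a +cu128 build
print(torch.cuda.is_available())   # True on a working NVIDIA setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```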
```bash
# Manual ComfyUI Manager installation
cd custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
cd ..

# Install custom nodes
./install_custom_nodes.sh
```
2. Download Required Models
Model Directory Structure
models/
├── checkpoints/   # Base diffusion models (.safetensors)
├── vae/           # VAE models
├── loras/         # LoRA weights
├── controlnet/    # ControlNet models
├── clip_vision/   # CLIP vision models
└── ipadapter/     # IP-Adapter models
Required Models for Research Reproduction
Base Models:
```bash
# Create directories
mkdir -p models/checkpoints models/loras models/ipadapter models/clip_vision

# SDXL-Turbo Inpainting Model
wget -P models/checkpoints/ https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors
```
IP-Adapter Components:
```bash
# IP Composition Adapter - download the specific files
wget -P models/ipadapter/ https://huggingface.co/ostris/ip-composition-adapter/resolve/main/ip_plus_composition_sd15.safetensors
# Or for the SDXL version:
wget -P models/ipadapter/ https://huggingface.co/ostris/ip-composition-adapter/resolve/main/ip_plus_composition_sdxl.safetensors

# CLIP ViT-H/14 LAION-2B - download the model files
wget -P models/clip_vision/ https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin
wget -P models/clip_vision/ https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/config.json
```
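If wget is unavailable, the same Hugging Face files can be fetched programmatically; a minimal sketch using the huggingface_hub package (an alternative to the commands above, not part of the original instructions):

```python
# Alternative to wget: download the same files with huggingface_hub
# (pip install huggingface_hub). Target paths mirror the wget commands above.
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="stabilityai/sdxl-turbo",
                filename="sd_xl_turbo_1.0_fp16.safetensors",
                local_dir="models/checkpoints")
hf_hub_download(repo_id="ostris/ip-composition-adapter",
                filename="ip_plus_composition_sdxl.safetensors",
                local_dir="models/ipadapter")
for fname in ("open_clip_pytorch_model.bin", "config.json"):
    hf_hub_download(repo_id="laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
                    filename=fname,
                    local_dir="models/clip_vision")
```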
Manual Downloads Required:
- Interior Scene XL: Visit https://civitai.com/models/715747/interior-scene-xl and download the model file to models/checkpoints/
- Factory Model (optional): Visit https://civitai.com/models/77373/factory for additional scene generation
Note: Some models from CivitAI require account registration and manual download due to licensing agreements.
3. Custom Nodes Installation
Automated Installation (Recommended)
We provide a comprehensive installation script that clones all the custom nodes used in this research:
```bash
# Make the script executable (if not already)
chmod +x install_custom_nodes.sh

# Run the installation script
./install_custom_nodes.sh
```
Installed Custom Nodes Include:
- ComfyUI Manager - Essential for managing nodes and models
- ComfyUI IPAdapter Plus - IP-Adapter functionality for composition
- ComfyUI Impact Pack/Subpack - Advanced image processing and segmentation
- ComfyUI Inspire Pack - Additional workflow utilities
- ComfyUI Custom Scripts - Workflow enhancements and UI improvements
- ComfyUI Dynamic Prompts - Dynamic prompt generation
- ComfyUI KJNodes - Various utility nodes for image processing
- ComfyUI Ultimate SD Upscale - Advanced upscaling capabilities
- ComfyUI GGUF - Support for GGUF model format
- ComfyUI Image Filters - Comprehensive image filtering nodes
- ComfyUI Depth Anything V2 - Depth estimation capabilities
- ComfyUI RMBG - Background removal functionality
- ComfyUI FizzNodes - Animation and scheduling nodes
- RGThree ComfyUI - Advanced workflow management
- WAS Node Suite - Comprehensive collection of utility nodes
- And more...
4. Using ComfyUI Manager
After installing ComfyUI Manager, you can easily install missing nodes and models:
```bash
# Start ComfyUI first
python main.py --listen 0.0.0.0 --port 8188
```
In the ComfyUI Web Interface:
- Access Manager: Click the "Manager" button in the ComfyUI interface
- Install Missing Nodes:
  - Load any workflow that uses custom nodes
  - Click "Install Missing Custom Nodes" to automatically install required nodes
- Install Models:
  - Go to the "Model Manager" tab
  - Search for and install models directly from the interface
  - Supports HuggingFace, CivitAI, and other model repositories
Alternative Model Installation via Manager:
- Checkpoints: Search for "SDXL" or "Stable Diffusion" models
- IP-Adapters: Search for "IP-Adapter" in the model manager
- ControlNets: Browse and install ControlNet models as needed
- LoRAs: Install LoRA models directly through the interface
Benefits of using ComfyUI Manager:
- Automatic dependency resolution
- One-click installation of missing nodes
- Model browser with direct download
- Version management
- Automatic updates
5. Start ComfyUI Server
```bash
# Local access
python main.py

# Network access (for cluster/remote)
python main.py --listen 0.0.0.0 --port 8188

# With latest frontend
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest
```
Access at: http://localhost:8188
Research-Specific Features
Custom Guidance Methods
- FreSca: Frequency-dependent scaling guidance (comfy_extras/nodes_fresca.py)
- PAG: Perturbed Attention Guidance (comfy_extras/nodes_pag.py)
- SAG: Self-Attention Guidance (comfy_extras/nodes_sag.py)
- SLG: Skip Layer Guidance (comfy_extras/nodes_slg.py)
- APG: Adaptive Projected Guidance (comfy_extras/nodes_apg.py)
- Mahiro: Direction-based guidance scaling (comfy_extras/nodes_mahiro.py)
Advanced Sampling
- Custom samplers and schedulers (comfy_extras/nodes_custom_sampler.py)
- Token merging optimization (comfy_extras/nodes_tomesd.py)
- Various diffusion model sampling methods
Research Configuration
Key Hyperparameters for Synthetic Image Generation
The following table summarizes the key hyperparameters used in our synthetic image generation pipeline:
| Parameter | Value / Configuration |
|---|---|
| Scene Generation Specifics | |
| Base Model | Stable Diffusion XL 1.0 |
| Image Resolution | 1024 × 1024 |
| Sampler | DPM++ 2M SDE (GPU) |
| Scheduler | Karras |
| Sampling Steps | 64 |
| CFG Scale | 8 |
| LoRA Strength | 0.2–0.4 |
| IP-Adapter | IP Composition + CLIP-ViT-H |
| IP-Adapter Strength | 0.6 |
| Inpainting Specifics | |
| Inpainting Model | SDXL-Turbo Inpainting |
| Differential Diffusion | Enabled |
| Mask Feathering | 50 pixels |
| Mask Opacity | 75% |
| Denoise Strength | 0.5–0.6 |
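If you script generation through the ComfyUI API rather than the UI, these settings can be patched into an exported workflow. A minimal sketch, assuming an API-format export in which node "3" happens to be the KSampler (the node ID is hypothetical; inspect your own export):

```python
# Sketch: write the table's sampler settings into an API-format workflow.
# Node ID "3" is hypothetical -- find the KSampler node in your own export.
import json

with open("generation_workflow.json") as f:
    workflow = json.load(f)

ksampler = workflow["3"]["inputs"]
ksampler["steps"] = 64
ksampler["cfg"] = 8.0
ksampler["sampler_name"] = "dpmpp_2m_sde_gpu"
ksampler["scheduler"] = "karras"

with open("generation_workflow_tuned.json", "w") as f:
    json.dump(workflow, f, indent=2)
```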
Model References
- Interior Scene XL: https://civitai.com/models/715747/interior-scene-xl
- SDXL-Turbo: https://huggingface.co/stabilityai/sdxl-turbo
- IP Composition Adapter: https://huggingface.co/ostris/ip-composition-adapter
- CLIP ViT-H/14 LAION-2B: https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K
Configuration in ComfyUI
When setting up workflows in ComfyUI, ensure the following nodes are configured with the specified parameters:
KSampler/KSampler Advanced:
- Steps: 64
- CFG: 8.0
- Sampler: dpmpp_2m_sde_gpu (or dpmpp_2m_sde if the GPU variant is unavailable)
- Scheduler: karras
LoRA Loader:
- Strength Model: 0.2-0.4 range
- Strength CLIP: 0.2-0.4 range
IPAdapter:
- Weight: 0.6
- Weight Type: composition (for IP Composition Adapter)
Inpainting Specific:
- Denoise: 0.5-0.6
- Use differential diffusion when available
- Mask feathering: 50 pixels
- Mask opacity: 0.75
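Outside ComfyUI, the mask feathering and opacity values above amount to a blur plus a brightness cap; a minimal Pillow sketch (an illustration, not the exact node implementation):

```python
# Sketch: 50 px feather and 75% opacity on a binary inpainting mask.
from PIL import Image, ImageFilter

mask = Image.open("mask.png").convert("L")                    # 0 or 255
feathered = mask.filter(ImageFilter.GaussianBlur(radius=50))  # soften edges
softened = feathered.point(lambda v: int(v * 0.75))           # 75% opacity
softened.save("mask_feathered.png")
```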
Running Experiments
Load Research Workflows
- Navigate to the ComfyUI interface
- Load workflows from user/default/workflows/:
  - IMG-SDTune-Lightning-RD.json
  - Inpaint.json
  - IP-Adapter.json
  - Test Factory.json
Using ComfyUI Manager with Workflows:
- When loading workflows, if nodes are missing, ComfyUI Manager will show a popup
- Click "Install Missing Custom Nodes" to automatically install required nodes
- Restart ComfyUI after installation
- Reload the workflow to verify all nodes are available
For Cluster Usage
See CLUSTER_ACCESS_README.md for detailed SLURM cluster setup with SSH tunneling.
API Usage
```bash
# Basic API example
python script_examples/basic_api_example.py

# WebSocket examples
python script_examples/websockets_api_example.py
```
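Both scripts ship with ComfyUI. At its core, queuing a job is a single POST of an API-format workflow to the server's /prompt endpoint; a minimal sketch (assumes the server from step 5 is running locally):

```python
# Minimal sketch: queue an API-format workflow on a running ComfyUI server.
import json
import urllib.request

with open("generation_workflow.json") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # response includes the queued prompt_id
```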
Troubleshooting
CUDA Issues:
```bash
pip uninstall torch
pip install torch --extra-index-url https://download.pytorch.org/whl/cu128
```
Memory Issues:
```bash
python main.py --cpu      # CPU fallback
python main.py --lowvram  # Reduced GPU memory usage
```
Custom Nodes Not Loading:
- Check the custom_nodes/ directory
- Restart ComfyUI after installing new nodes
- Check logs for dependency issues
- Use ComfyUI Manager to reinstall problematic nodes
- Try "Update All" in ComfyUI Manager for compatibility fixes
ComfyUI Manager Issues:
- If Manager button doesn't appear, restart ComfyUI
- Check that ComfyUI-Manager is properly cloned in custom_nodes/
- For model download failures, try the manual wget commands provided above
- Clear browser cache if Manager interface doesn't load properly
Custom Nodes Installation Script Issues:
- If the script fails with permission errors, run: chmod +x install_custom_nodes.sh
- For network issues, try running the script again (it will skip existing installations)
- If specific nodes fail to clone, check your internet connection and GitHub access
- Some nodes may require additional dependencies - check individual node README files
- After running the script, restart ComfyUI to load all new nodes
Directory Structure
After setup, your ComfyUI directory should look like this:
ComfyUI/
├── models/
│   ├── checkpoints/
│   │   ├── [SDXL models]
│   │   └── [Inpainting models]
│   ├── loras/
│   │   └── [LoRA models]
│   ├── controlnet/
│   │   └── [ControlNet models]
│   ├── ipadapter/
│   │   └── [IP-Adapter models]
│   └── [other model directories]
├── custom_nodes/
│   ├── ComfyUI-Manager/
│   ├── ComfyUI-IPAdapter-Plus/
│   └── [other extensions]
└── [other ComfyUI files]
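A quick way to confirm the layout is a small check script (a hypothetical helper, not part of the repository):

```python
# Hypothetical helper: verify the expected directories exist.
# Run from the ComfyUI root directory.
from pathlib import Path

expected = [
    "models/checkpoints",
    "models/loras",
    "models/controlnet",
    "models/ipadapter",
    "models/clip_vision",
    "custom_nodes/ComfyUI-Manager",
]
for rel in expected:
    print(f"{rel}: {'ok' if Path(rel).is_dir() else 'MISSING'}")
```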
SynSpill Integration
After ComfyUI is set up:
- Clone the SynSpill repository
- Copy the provided ComfyUI workflows to your ComfyUI directory
- Configure the data paths in the workflow files
- Run the synthetic data generation pipeline
Data Directory
This directory contains datasets and annotations for the SynSpill project.
Structure
- synthetic/ - Generated synthetic spill images and annotations
- real/ - Real-world industrial CCTV footage (test set)
- annotations/ - Ground truth labels and bounding boxes
Synthetic Data
The synthetic dataset is generated using our AnomalInfusion pipeline:
- Stable Diffusion XL for base image generation
- IP adapters for style conditioning
- Inpainting for precise spill placement
Citation
If you use this data in your research, please cite our ICCV 2025 paper.
SynSpill Data Directory
This directory contains datasets, annotations, and workflow configurations for the SynSpill project - a comprehensive dataset for industrial spill detection and synthesis.
Directory Structure
data/
├── README.md                  # This file
├── generation_workflow.json   # ComfyUI workflow for synthetic image generation
├── inpainting_workflow.json   # ComfyUI workflow for inpainting operations
├── release/                   # Full dataset release
│   ├── annotation_masks/      # Binary masks for spill regions (PNG format)
│   ├── annotations/           # Ground truth annotations and metadata
│   └── generated_images/      # Complete set of synthetic spill images
└── samples/                   # Sample data for preview and testing
    ├── annotation_masks/      # Sample binary masks
    ├── generated_images/      # Sample synthetic images
    └── inpainted_images/      # Sample inpainted results
Dataset Contents
Release Dataset (release/)
- Generated Images: High-quality synthetic industrial spill scenarios
- Annotation Masks: Pixel-perfect binary masks identifying spill regions
- Annotations: Structured metadata including bounding boxes, class labels, and scene descriptions
Sample Dataset (samples/)
A subset of the full dataset for quick evaluation and testing purposes, containing:
- Representative examples from each category
- Various spill types and industrial environments
- Both generated and inpainted image samples
Workflow Configurations
- generation_workflow.json: ComfyUI workflow for generating base synthetic images using Stable Diffusion XL
- inpainting_workflow.json: ComfyUI workflow for precise spill placement and inpainting operations
Synthetic Data Generation
The synthetic dataset is created using our AnomalInfusion pipeline:
- Base Generation: Stable Diffusion XL creates industrial environment images
- Style Conditioning: IP adapters ensure consistent visual style across scenes
- Spill Synthesis: Controlled inpainting places realistic spills in specified locations
- Mask Generation: Automated creation of precise segmentation masks
Usage
The data is organized for direct use with computer vision frameworks:
- Images are in standard formats (PNG/JPG)
- Masks are binary images (0 = background, 255 = spill)
- Annotations follow standard object detection formats
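For example, a spill bounding box can be recovered straight from a mask; a minimal sketch with NumPy and Pillow (the file name is illustrative):

```python
# Sketch: read a binary spill mask (0 = background, 255 = spill) and
# recover an (x_min, y_min, x_max, y_max) bounding box.
import numpy as np
from PIL import Image

mask = np.array(Image.open("annotation_masks/example.png").convert("L"))
ys, xs = np.nonzero(mask > 127)   # pixel coordinates inside the spill
if xs.size:
    print("spill bbox:", (xs.min(), ys.min(), xs.max(), ys.max()))
else:
    print("no spill pixels in this mask")
```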
Citation
If you use this dataset in your research, please cite our ICCV 2025 paper:
@inproceedings{baranwal2025synspill,
title={SynSpill: Improved Industrial Spill Detection With Synthetic Data},
author={Baranwal, Aaditya and Bhatia, Guneet and Mueez, Abdul and Voelker, Jason and Vyas, Shruti},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision - Workshops (ICCV-W)},
year={2025}
}
Troubleshooting
Common Issues
- CUDA out of memory: Reduce batch size or use model offloading
- Missing models: Ensure all models are downloaded and placed in correct directories
- Extension conflicts: Check ComfyUI Manager for compatibility issues
Performance Optimization
- Use the --lowvram flag if you have limited GPU memory
- Consider using --cpu for CPU-only inference (slower)
- Enable model offloading for better memory management