## Introduction
This repository contains the pre-trained weights for the paper "I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks", which has been accepted for Oral Presentation at AAAI 2026.
I2E is a pioneering framework that bridges the data-scarcity gap in neuromorphic computing. By simulating microsaccadic eye movements via highly parallelized convolution, I2E converts static images into high-fidelity event streams in real time (>300x faster than prior methods).
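To give a rough intuition for this idea, the sketch below generates ON/OFF event frames by shifting an image and thresholding log-intensity differences, the way a DVS sensor responds to a static scene under microsaccades. This is only an illustrative sketch, not the paper's implementation (which uses parallelized convolution rather than `torch.roll`); the function name, shift pattern, and threshold are all assumptions.

```python
import torch

def image_to_events(image: torch.Tensor,
                    shifts=((0, 1), (1, 0), (0, -1), (-1, 0)),
                    threshold: float = 0.1) -> torch.Tensor:
    """Turn a grayscale image of shape (1, H, W) into ON/OFF event frames.

    Each simulated micro-movement shifts the scene by one pixel; log-intensity
    changes exceeding `threshold` fire positive or negative events.
    """
    log_img = torch.log1p(image)                # event cameras sense log intensity
    frames = []
    for dy, dx in shifts:
        shifted = torch.roll(log_img, shifts=(dy, dx), dims=(-2, -1))
        diff = shifted - log_img                # brightness change from the shift
        on = (diff > threshold).float()         # positive-polarity events
        off = (diff < -threshold).float()       # negative-polarity events
        frames.append(torch.stack([on, off]))   # (2, 1, H, W) per time step
    return torch.stack(frames)                  # (T, 2, 1, H, W)

# Convert a random "image" into a 4-step event tensor
events = image_to_events(torch.rand(1, 128, 128))
print(events.shape)  # torch.Size([4, 2, 1, 128, 128])
```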
## Key Highlights
- SOTA Performance: Achieves 60.50% top-1 accuracy on Event-based ImageNet.
- Sim-to-Real Transfer: Pre-training on I2E data enables 92.5% accuracy on real-world CIFAR10-DVS, setting a new benchmark.
- Real-Time Conversion: Enables on-the-fly data augmentation for deep SNN training.
## Model Zoo & Results
We provide pre-trained models for I2E-CIFAR and I2E-ImageNet. You can download the .pth files directly from the Files and versions tab in this repository.
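As a quick-start sketch, a checkpoint can be fetched and inspected with the `huggingface_hub` library. The repo ID and filename below are placeholders; substitute the actual values from the Files and versions tab.

```python
import torch
from huggingface_hub import hf_hub_download

# Placeholder repo ID and filename -- use the actual .pth file listed
# under "Files and versions" in this repository.
ckpt_path = hf_hub_download(
    repo_id="<user>/<this-repo>",
    filename="msresnet18_i2e_imagenet.pth",
)

# Load on CPU first; the key layout depends on how the checkpoint was saved.
state_dict = torch.load(ckpt_path, map_location="cpu")
print(f"{len(state_dict)} tensors, e.g. {list(state_dict)[:3]}")
```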
| Target Dataset | Architecture | Method | Top-1 Acc. |
|---|---|---|---|
| CIFAR10-DVS (Real) | MS-ResNet18 | Baseline | 65.6% |
| CIFAR10-DVS (Real) | MS-ResNet18 | Transfer-I | 83.1% |
| CIFAR10-DVS (Real) | MS-ResNet18 | Transfer-II (Sim-to-Real) | 92.5% |
| I2E-CIFAR10 | MS-ResNet18 | Baseline-I | 85.07% |
| I2E-CIFAR10 | MS-ResNet18 | Baseline-II | 89.23% |
| I2E-CIFAR10 | MS-ResNet18 | Transfer-I | 90.86% |
| I2E-CIFAR100 | MS-ResNet18 | Baseline-I | 51.32% |
| I2E-CIFAR100 | MS-ResNet18 | Baseline-II | 60.68% |
| I2E-CIFAR100 | MS-ResNet18 | Transfer-I | 64.53% |
| I2E-ImageNet | MS-ResNet18 | Baseline-I | 48.30% |
| I2E-ImageNet | MS-ResNet18 | Baseline-II | 57.97% |
| I2E-ImageNet | MS-ResNet18 | Transfer-I | 59.28% |
| I2E-ImageNet | MS-ResNet34 | Baseline-II | 60.50% |
Method Legend:
- Baseline-I: Training from scratch with minimal augmentation.
- Baseline-II: Training from scratch with full augmentation.
- Transfer-I: Fine-tuning from static ImageNet (or I2E-ImageNet for CIFAR targets); see the sketch after this list.
- Transfer-II: Fine-tuning from I2E-CIFAR10.
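A minimal sketch of the Transfer-I recipe, assuming an `MSResNet18` class exposed by the official GitHub code and a locally downloaded checkpoint. Both the import path and the `fc` head attribute are assumptions; adapt them to the actual repository.

```python
import torch
import torch.nn as nn

# Assumption: the official GitHub code exposes an MSResNet18 constructor;
# adjust the import path and arguments to match the real repository.
from models import MSResNet18

model = MSResNet18(num_classes=1000)
state_dict = torch.load("msresnet18_i2e_imagenet.pth", map_location="cpu")
model.load_state_dict(state_dict, strict=False)  # strict=False tolerates a head mismatch

# Swap the classifier head for a 10-class target such as CIFAR10-DVS and
# fine-tune at a reduced learning rate (attribute name `fc` is an assumption).
model.fc = nn.Linear(model.fc.in_features, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```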
## Visualization
I2E converts static RGB images into dynamic event streams with high fidelity. More than 200 side-by-side visualization comparisons can be found in Visualization.md.
## Usage
This repository hosts the model weights only.
For the I2E dataset generation code, training scripts, and detailed usage instructions, please refer to our official GitHub repository. To generate the I2E-CIFAR10, I2E-CIFAR100, and I2E-ImageNet datasets yourself with the I2E algorithm, follow the instructions in the GitHub README.
The download address for the datasets generated by the I2E algorithm is as follows.
## Citation
If you find this work or the models useful, please cite our AAAI 2026 paper:
```bibtex
@article{ma2025i2e,
  title={I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks},
  author={Ma, Ruichen and Meng, Liwei and Qiao, Guanchao and Ning, Ning and Liu, Yang and Hu, Shaogang},
  journal={arXiv preprint arXiv:2511.08065},
  year={2025}
}
```