UESTC-BICS committed
Commit edfd5e0 · verified · 1 Parent(s): 35b13f2

Update README.md

Files changed (1)
  1. README.md +145 -117
README.md CHANGED
@@ -1777,16 +1777,15 @@ configs:
  license: mit
  task_categories:
  - image-classification
  tags:
  - neuromorphic
  - snn
- - spiking-neural-networks
- - event-based-vision
  - dvs
-
  - biology
  - pytorch
-
  - imagenet
  - cifar10
  - cifar100
@@ -1796,6 +1795,8 @@ tags:
  - fashionmnist
  - mini-imagenet
  pretty_name: I2E Neuromorphic Dataset
  ---

@@ -1814,62 +1815,167 @@ pretty_name: I2E Neuromorphic Dataset

  ## 🚀 Introduction

- This repository contains the **I2E-Datasets** for the paper **"I2E: Real-Time Image-to-Event Conversion for High-Performance Spiking Neural Networks"**, which has been accepted for **Oral Presentation at AAAI 2026**.

- **I2E** is a pioneering framework that bridges the data-scarcity gap in neuromorphic computing. By simulating microsaccadic eye movements via highly parallelized convolution, I2E converts static images into high-fidelity event streams in real time (>300x faster than prior methods).

- ### ✨ Key Highlights
- * **SOTA Performance**: Achieves **60.50%** top-1 accuracy on Event-based ImageNet.
- * **Sim-to-Real Transfer**: Pre-training on I2E data enables **92.5%** accuracy on real-world CIFAR10-DVS, setting a new benchmark.
- * **Real-Time Conversion**: Enables on-the-fly data augmentation for deep SNN training.

- ## 💻 Usage
- The generated I2E-Datasets are provided in the [**Files and versions**](https://huggingface.co/datasets/UESTC-BICS/I2E/tree/main) section.
- ### 🚀 Quick Start (Copy & Paste)

- You don't need to manually handle the bit-unpacking. Just use the following snippet to load the data directly into PyTorch tensors.

  ```python
- import os
- import torch
- from tqdm import tqdm
- from load_i2e_datasets import I2E_Dataset
- os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com' # Use HF mirror server in some regions

- model_cache_path = 'Your path to model cache'
- load_datasets = 'I2E-CIFAR10'

- train_dataset = I2E_Dataset(model_cache_path, load_datasets, split='train', transform=None, target_transform=None)
- val_dataset = I2E_Dataset(model_cache_path, load_datasets, split='validation', transform=None, target_transform=None)
- print(f"Train samples: {len(train_dataset)}, Validation samples: {len(val_dataset)}")

- # You can create dataloaders according to your needs
- train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=256, shuffle=False, num_workers=32, pin_memory=False)
- val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=256, shuffle=False, num_workers=32, pin_memory=False)
- ```

- This repository hosts the **datasets only**.

- For the **I2E dataset generation code**, **training scripts**, and detailed usage instructions, please refer to our official GitHub repository.

- To generate the I2E-Datasets yourself using the I2E algorithm, please follow the instructions in the GitHub README.

- [![GitHub](https://img.shields.io/badge/GitHub-Repository-black?logo=github)](https://github.com/Ruichen0424/I2E)

- ## 🏆 Model Zoo & Results

- We provide pre-trained models for **I2E-CIFAR** and **I2E-ImageNet**. You can download the `.pth` files directly from the [**Files and versions**](https://huggingface.co/Ruichen0424/I2E/tree/main) tab of the model repository.

- [![Hugging Face](https://img.shields.io/badge/Hugging%20Face-Models-FFD21E?style=flat-square&logo=huggingface&logoColor=black)](https://huggingface.co/Ruichen0424/I2E)

  <table border="1">
  <tr>
- <th>Target Dataset</th>
  <th align="center">Architecture</th>
  <th align="center">Method</th>
  <th align="center">Top-1 Acc</th>
@@ -1891,90 +1997,12 @@ We provide pre-trained models for **I2E-CIFAR** and **I2E-ImageNet**. You can do
  <td align="center" style="vertical-align: middle;">Transfer-II (Sim-to-Real)</td>
  <td align="center" style="vertical-align: middle;"><strong>92.5%</strong></td>
  </tr>
- <!-- I2E-CIFAR10 -->
- <tr>
- <td rowspan="3" align="center" style="vertical-align: middle;"><strong>I2E-CIFAR10</strong></td>
- <td align="center" style="vertical-align: middle;">MS-ResNet18</td>
- <td align="center" style="vertical-align: middle;">Baseline-I</td>
- <td align="center" style="vertical-align: middle;">85.07%</td>
- </tr>
- <tr>
- <td align="center" style="vertical-align: middle;">MS-ResNet18</td>
- <td align="center" style="vertical-align: middle;">Baseline-II</td>
- <td align="center" style="vertical-align: middle;">89.23%</td>
- </tr>
- <tr>
- <td align="center" style="vertical-align: middle;">MS-ResNet18</td>
- <td align="center" style="vertical-align: middle;">Transfer-I</td>
- <td align="center" style="vertical-align: middle;"><strong>90.86%</strong></td>
- </tr>
- <!-- I2E-CIFAR100 -->
- <tr>
- <td rowspan="3" align="center" style="vertical-align: middle;"><strong>I2E-CIFAR100</strong></td>
- <td align="center" style="vertical-align: middle;">MS-ResNet18</td>
- <td align="center" style="vertical-align: middle;">Baseline-I</td>
- <td align="center" style="vertical-align: middle;">51.32%</td>
- </tr>
- <tr>
- <td align="center" style="vertical-align: middle;">MS-ResNet18</td>
- <td align="center" style="vertical-align: middle;">Baseline-II</td>
- <td align="center" style="vertical-align: middle;">60.68%</td>
- </tr>
- <tr>
- <td align="center" style="vertical-align: middle;">MS-ResNet18</td>
- <td align="center" style="vertical-align: middle;">Transfer-I</td>
- <td align="center" style="vertical-align: middle;"><strong>64.53%</strong></td>
- </tr>
- <!-- I2E-ImageNet -->
- <tr>
- <td rowspan="4" align="center" style="vertical-align: middle;"><strong>I2E-ImageNet</strong></td>
- <td align="center" style="vertical-align: middle;">MS-ResNet18</td>
- <td align="center" style="vertical-align: middle;">Baseline-I</td>
- <td align="center" style="vertical-align: middle;">48.30%</td>
- </tr>
- <tr>
- <td align="center" style="vertical-align: middle;">MS-ResNet18</td>
- <td align="center" style="vertical-align: middle;">Baseline-II</td>
- <td align="center" style="vertical-align: middle;">57.97%</td>
- </tr>
- <tr>
- <td align="center" style="vertical-align: middle;">MS-ResNet18</td>
- <td align="center" style="vertical-align: middle;">Transfer-I</td>
- <td align="center" style="vertical-align: middle;">59.28%</td>
- </tr>
- <tr>
- <td align="center" style="vertical-align: middle;">MS-ResNet34</td>
- <td align="center" style="vertical-align: middle;">Baseline-II</td>
- <td align="center" style="vertical-align: middle;"><strong>60.50%</strong></td>
- </tr>
  </table>

- > **Method Legend:**
- > * **Baseline-I**: Training from scratch with minimal augmentation.
- > * **Baseline-II**: Training from scratch with full augmentation.
- > * **Transfer-I**: Fine-tuning from Static ImageNet (or I2E-ImageNet for CIFAR targets).
- > * **Transfer-II**: Fine-tuning from I2E-CIFAR10.
-
- ## 👁️ Visualization
-
- Below is a visualization of the I2E conversion process. We illustrate the high-fidelity conversion from static RGB images to dynamic event streams.
-
- More than 200 additional visualization comparisons can be found in [Visualization.md](./Visualization.md).

- <table border="0" style="width: 100%">
- <tr>
- <td width="25%" align="center"><img src="./assets/original_1.jpg" alt="Original 1" style="width:100%"></td>
- <td width="25%" align="center"><img src="./assets/converted_1.gif" alt="Converted 1" style="width:100%"></td>
- <td width="25%" align="center"><img src="./assets/original_2.jpg" alt="Original 2" style="width:100%"></td>
- <td width="25%" align="center"><img src="./assets/converted_2.gif" alt="Converted 2" style="width:100%"></td>
- </tr>
- <tr>
- <td width="25%" align="center"><img src="./assets/original_3.jpg" alt="Original 3" style="width:100%"></td>
- <td width="25%" align="center"><img src="./assets/converted_3.gif" alt="Converted 3" style="width:100%"></td>
- <td width="25%" align="center"><img src="./assets/original_4.jpg" alt="Original 4" style="width:100%"></td>
- <td width="25%" align="center"><img src="./assets/converted_4.gif" alt="Converted 4" style="width:100%"></td>
- </tr>
- </table>

  ## 📜 Citation

  license: mit
  task_categories:
  - image-classification
+ - video-classification
  tags:
  - neuromorphic
  - snn
+ - spiking neural networks
+ - event
  - dvs
  - biology
  - pytorch
  - imagenet
  - cifar10
  - cifar100
  - fashionmnist
  - mini-imagenet
  pretty_name: I2E Neuromorphic Dataset
+ language:
+ - en
  ---

  ## 🚀 Introduction

+ This repository hosts the **I2E-Datasets**, a comprehensive suite of neuromorphic datasets generated using the **I2E (Image-to-Event)** framework. This work has been accepted for **Oral Presentation at AAAI 2026**.

+ **I2E** bridges the data-scarcity gap in **Neuromorphic Computing** and **Spiking Neural Networks (SNNs)**. By simulating microsaccadic eye movements via highly parallelized convolution, I2E converts static images into high-fidelity event streams in real time (>300x faster than prior methods).
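+ 
+ As a rough intuition for the mechanism (a minimal toy sketch only, not the paper's parallelized-convolution operator; all names and thresholds here are illustrative), shifting an image as a microsaccade would and thresholding the log-brightness change yields ON/OFF events:
+ 
+ ```python
+ import numpy as np
+ 
+ def toy_image_to_events(img, dx=1, dy=1, threshold=0.1):
+     """img: float array in [0, 1], shape (H, W). Returns (on, off) event maps."""
+     shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # simulated microsaccade shift
+     diff = np.log1p(shifted) - np.log1p(img)                 # brightness change, DVS-style log domain
+     on = (diff > threshold).astype(np.uint8)                 # ON-polarity events
+     off = (diff < -threshold).astype(np.uint8)               # OFF-polarity events
+     return on, off
+ ```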
+ ## 👁️ Visualization
+ 
+ The following comparisons illustrate the high-fidelity conversion from static RGB images to dynamic event streams using I2E.
+ 
+ <table border="0" style="width: 100%">
+ <tr>
+ <td width="25%" align="center"><img src="./assets/original_1.jpg" alt="Original 1" style="width:100%"></td>
+ <td width="25%" align="center"><img src="./assets/converted_1.gif" alt="Converted 1" style="width:100%"></td>
+ <td width="25%" align="center"><img src="./assets/original_2.jpg" alt="Original 2" style="width:100%"></td>
+ <td width="25%" align="center"><img src="./assets/converted_2.gif" alt="Converted 2" style="width:100%"></td>
+ </tr>
+ <tr>
+ <td width="25%" align="center"><img src="./assets/original_3.jpg" alt="Original 3" style="width:100%"></td>
+ <td width="25%" align="center"><img src="./assets/converted_3.gif" alt="Converted 3" style="width:100%"></td>
+ <td width="25%" align="center"><img src="./assets/original_4.jpg" alt="Original 4" style="width:100%"></td>
+ <td width="25%" align="center"><img src="./assets/converted_4.gif" alt="Converted 4" style="width:100%"></td>
+ </tr>
+ </table>
+ 
+ *More visualization comparisons can be found in [Visualization.md](./Visualization.md).*
+ 
+ ## 📦 Dataset Catalog
+ 
+ We provide a comprehensive collection of standard benchmarks converted into event streams via the I2E algorithm.
+ 
+ ### 1. Standard Benchmarks (Classification)
+ | Config Name | Original Source | Resolution $(H, W)$ | I2E Ratio | Event Rate | Samples (Train/Val) |
+ | :--- | :--- | :--- | :--- | :--- | :--- |
+ | **`I2E-CIFAR10`** | CIFAR-10 | 128 × 128 | 0.07 | 5.86% | 50k / 10k |
+ | **`I2E-CIFAR100`** | CIFAR-100 | 128 × 128 | 0.07 | 5.76% | 50k / 10k |
+ | **`I2E-ImageNet`** | ILSVRC2012 | 224 × 224 | 0.12 | 6.66% | 1.28M / 50k |
+ 
+ ### 2. Transfer Learning & Fine-grained
+ | Config Name | Original Source | Resolution $(H, W)$ | I2E Ratio | Event Rate | Samples |
+ | :--- | :--- | :--- | :--- | :--- | :--- |
+ | **`I2E-Caltech101`** | Caltech-101 | 224 × 224 | 0.12 | 6.25% | 8,677 |
+ | **`I2E-Caltech256`** | Caltech-256 | 224 × 224 | 0.12 | 6.04% | 30,607 |
+ | **`I2E-Mini-ImageNet`** | Mini-ImageNet | 224 × 224 | 0.12 | 6.65% | 60k |
+ 
+ ### 3. Small Scale / Toy
+ | Config Name | Original Source | Resolution $(H, W)$ | I2E Ratio | Event Rate | Samples (Train/Val) |
+ | :--- | :--- | :--- | :--- | :--- | :--- |
+ | **`I2E-MNIST`** | MNIST | 32 × 32 | 0.10 | 9.56% | 60k / 10k |
+ | **`I2E-FashionMNIST`** | Fashion-MNIST | 32 × 32 | 0.15 | 10.76% | 60k / 10k |
+ 
+ > 🔜 **Coming Soon:** Object Detection and Semantic Segmentation datasets.
+ 
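+ 
+ The "Event Rate" column can be sanity-checked from a decoded sample: the decoded tensors are binary, so (assuming the column reports the fraction of active bins) the rate is simply the tensor mean. A minimal sketch, reusing the `unpack_event_data` helper defined in the Quick Start below; per-sample rates vary around the dataset average:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ ds = load_dataset('UESTC-BICS/I2E', 'I2E-CIFAR10', split='train')
+ event = unpack_event_data(ds[0])                 # helper from the Quick Start below
+ print(f"Shape (T, C, H, W): {tuple(event.shape)}")
+ print(f"Event rate: {event.mean().item():.2%}")  # dataset average: 5.86%
+ ```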
+ ## 🛠️ Preprocessing Protocol
+ 
+ To ensure reproducibility, we specify the exact data-augmentation pipeline applied to the static images **before** I2E conversion.
+ 
+ The `(H, W)` in the code below corresponds to the "Resolution" column in the Dataset Catalog above.

  ```python
+ import torch
+ from torchvision.transforms import v2
+ 
+ # Standard pre-processing pipeline used for I2E generation;
+ # H and W are the target resolution from the Dataset Catalog.
+ transform_train = v2.Compose([
+     # Ensure 3-channel RGB (crucial for grayscale datasets like MNIST)
+     v2.Lambda(lambda x: x.convert('RGB')),
+     v2.PILToTensor(),
+     v2.Resize((H, W), interpolation=v2.InterpolationMode.BICUBIC),
+     v2.ToDtype(torch.float32, scale=True),
+ ])
+ ```
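+ 
+ A hypothetical end-to-end check of this pipeline (a sketch, assuming the I2E-CIFAR10 resolution of 128 × 128; the input image is a synthetic stand-in):
+ 
+ ```python
+ import torch
+ from PIL import Image
+ from torchvision.transforms import v2
+ 
+ H, W = 128, 128  # "Resolution" column for I2E-CIFAR10
+ transform_train = v2.Compose([
+     v2.Lambda(lambda x: x.convert('RGB')),   # grayscale-safe
+     v2.PILToTensor(),                        # PIL -> uint8 tensor
+     v2.Resize((H, W), interpolation=v2.InterpolationMode.BICUBIC),
+     v2.ToDtype(torch.float32, scale=True),   # uint8 [0, 255] -> float32 [0, 1]
+ ])
+ 
+ img = Image.new('L', (32, 32))               # grayscale stand-in, like an MNIST digit
+ x = transform_train(img)
+ print(x.shape, x.dtype)                      # torch.Size([3, 128, 128]) torch.float32
+ ```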

+ ## 💻 Usage

+ ### 🚀 Quick Start

+ You **do not** need to download any extra scripts. Just copy the code below; it handles the binary unpacking (converting Parquet bytes to PyTorch tensors) automatically.

+ ```python
+ import os
+ os.environ.setdefault('HF_ENDPOINT', 'https://hf-mirror.com')  # optional HF mirror for some regions; set before `datasets` is imported
+ 
+ import io
+ import torch
+ import numpy as np
+ from datasets import load_dataset
+ from torch.utils.data import Dataset, DataLoader
+ 
+ # ==================================================================
+ # 1. Core Decoding Function (handles the binary packing)
+ # ==================================================================
+ def unpack_event_data(item, use_io=True):
+     """
+     Decodes the custom binary format:
+     Header (8 bytes) -> Shape (T, C, H, W) -> Body (packed bits)
+     """
+     if use_io:
+         with io.BytesIO(item['data']) as f:
+             raw_data = np.load(f)
+     else:
+         raw_data = np.load(item)
+ 
+     header_size = 4 * 2  # parse header (first 8 bytes hold 4 uint16 shape values)
+     shape_header = raw_data[:header_size].view(np.uint16)
+     original_shape = tuple(shape_header)  # (T, C, H, W)
+ 
+     packed_body = raw_data[header_size:]  # parse body & bit-unpack
+     unpacked = np.unpackbits(packed_body)
+ 
+     num_elements = np.prod(original_shape)  # extract valid bits (handle padding)
+     event_flat = unpacked[:num_elements]
+     event_data = event_flat.reshape(original_shape).astype(np.float32).copy()
+ 
+     return torch.from_numpy(event_data)
+ 
+ # ==================================================================
+ # 2. Dataset Wrapper
+ # ==================================================================
+ class I2E_Dataset(Dataset):
+     def __init__(self, cache_dir, config_name, split='train', transform=None, target_transform=None):
+         print(f"🚀 Loading {config_name} [{split}] from Hugging Face...")
+         self.ds = load_dataset('UESTC-BICS/I2E', config_name, split=split, cache_dir=cache_dir, keep_in_memory=False)
+         self.transform = transform
+         self.target_transform = target_transform
+ 
+     def __len__(self):
+         return len(self.ds)
+ 
+     def __getitem__(self, idx):
+         item = self.ds[idx]
+         event = unpack_event_data(item)
+         label = item['label']
+         if self.transform:
+             event = self.transform(event)
+         if self.target_transform:
+             label = self.target_transform(label)
+         return event, label
+ 
+ # ==================================================================
+ # 3. Run Example
+ # ==================================================================
+ if __name__ == "__main__":
+     DATASET_NAME = 'I2E-CIFAR10'         # choose your config: 'I2E-CIFAR10', 'I2E-ImageNet', etc.
+     CACHE_PATH = 'Your cache path here'  # e.g., './hf_datasets_cache/'
+ 
+     train_dataset = I2E_Dataset(CACHE_PATH, DATASET_NAME, split='train')
+     val_dataset = I2E_Dataset(CACHE_PATH, DATASET_NAME, split='validation')
+ 
+     train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=32, persistent_workers=True)
+     val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False, num_workers=32, persistent_workers=True)
+ 
+     events, labels = next(iter(train_loader))
+     print(f"✅ Loaded batch shape: {events.shape}")  # expect: [32, T, 2, H, W]
+     print(f"✅ Labels: {labels}")
+ ```
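+ 
+ Each decoded sample is a binary tensor of shape `[T, 2, H, W]` (timesteps, ON/OFF polarity, height, width). SNNs typically consume such a batch one timestep at a time; a minimal sketch of that loop (`snn_model` is a hypothetical placeholder, and the random tensor stands in for a real batch):
+ 
+ ```python
+ import torch
+ 
+ # Stand-in for one batch from train_loader above: [B, T, 2, H, W]
+ events = (torch.rand(4, 10, 2, 128, 128) < 0.06).float()  # ~6% active, like the catalog's event rates
+ 
+ for t in range(events.shape[1]):   # step through the time dimension
+     frame = events[:, t]           # [B, 2, H, W] binary ON/OFF frame at step t
+     # out = snn_model(frame)       # placeholder: accumulate spikes / membrane state
+     print(f"t={t}: {tuple(frame.shape)}, active = {frame.mean().item():.2%}")
+ ```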
+ 
+ ## 🏆 Results (SOTA)
+ 
+ Our I2E pre-training sets a new benchmark for Sim-to-Real transfer on **CIFAR10-DVS**.

  <table border="1">
  <tr>
+ <th>Dataset</th>
  <th align="center">Architecture</th>
  <th align="center">Method</th>
  <th align="center">Top-1 Acc</th>

  <td align="center" style="vertical-align: middle;">Transfer-II (Sim-to-Real)</td>
  <td align="center" style="vertical-align: middle;"><strong>92.5%</strong></td>
  </tr>
  </table>
 
+ *For full results and model weights, please visit our [GitHub Repo](https://github.com/Ruichen0424/I2E).*
+ 
+ [![GitHub](https://img.shields.io/badge/GitHub-Repository-black?logo=github)](https://github.com/Ruichen0424/I2E)
+ [![Hugging Face](https://img.shields.io/badge/Hugging%20Face-Models-FFD21E?style=flat-square&logo=huggingface&logoColor=black)](https://huggingface.co/Ruichen0424/I2E)

  ## 📜 Citation