Unified Multimodal Sensor Dataset [DRAFT]
A unified multimodal dataset designed to accelerate research in multisensory perception, embodied AI, and sensor fusion.
Dataset Description
Modalities:
- Visual
- Olfactory
- Audio
- Inertial
- Environmental (temperature, humidity, barometric pressure, luminosity, etc.)
Data Format: All sensor streams are time-synchronized and stored in a standardized JSON / NoSQL format. Metadata includes timestamps, environment labels, and calibration parameters; see the example record below.
Total Samples: X entries
Environments: Indoor, outdoor, lab-controlled, and natural settings
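To make the format concrete, here is a minimal sketch of what a single synchronized record might look like. The field names, units, and file paths below are illustrative assumptions, not the dataset's actual schema.

{
  "timestamp": "2025-01-15T09:32:07.120Z",
  "environment": "indoor_lab",
  "visual": "data/frames/000123.png",
  "olfactory": {"sensor_channels": [0.42, 0.17, 0.88, 0.05]},
  "audio": "data/audio/000123.wav",
  "inertial": {"accel": [0.01, -0.02, 9.81], "gyro": [0.001, 0.000, -0.002]},
  "environmental": {"temperature_c": 22.4, "humidity_pct": 41.0, "pressure_hpa": 1013.2, "luminosity_lux": 320.0},
  "calibration": "metadata/calibration/rig_A.json"
}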
Running Locally
To explore the dataset locally, follow the steps below:
1. Clone the Repository
git clone https://github.com/KordelFranceTech/MultimodalDataset.git
cd MultimodalDataset
2. Create a Virtual Environment
python -m venv env
source env/bin/activate # On Windows: .\env\Scripts\activate
3. Install Dependencies
pip install -r requirements.txt
4. Run Sample Exploration Script
python scripts/visualize_data.py
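To inspect a record programmatically rather than through the visualization script, the sketch below uses only the Python standard library. The file path and field names are assumptions based on the hypothetical record shown earlier, not a documented API; adjust them to match the files in data/.

import json
from pathlib import Path

# Hypothetical path to one example record; point this at a file that exists in data/.
sample_path = Path("data") / "sample_0001.json"

# Load the JSON record from disk.
with sample_path.open() as f:
    record = json.load(f)

# Print the top-level fields plus the capture timestamp and environment label.
print("Fields:", sorted(record.keys()))
print("Timestamp:", record.get("timestamp"))
print("Environment:", record.get("environment"))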
Directory Structure
MultimodalDataset/
├── data/              # Example data samples
├── scripts/           # Data loading and visualization tools
├── notebooks/         # Colab-ready notebooks
├── metadata/          # Sensor calibration, annotations
├── requirements.txt   # Python dependencies
├── LICENSE            # Licensing terms of this repository
└── README.md          # Overview of repository contents and usage
Citation
If you use this dataset in your research, please cite it as follows:
@article{multimodal_dataset,
title={A Multimodal Dataset for Sensor Fusion in Vision, Olfaction, Audio, and Inertial Domains},
author={Rohith Peddi and Kordel France and Likhith Aal},
journal={Journal or Conference Name},
year={2025},
url={https://example.com/paper}
}
License
This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.