Acknowledgment and Questions on the CineBrain Dataset
Dear Authors,
I am writing to express my sincere gratitude for your outstanding work on the CineBrain dataset, which has provided invaluable support for my research on EEG-based multimodal neural decoding.
I have encountered several questions while using the dataset (https://huggingface.co/datasets/Fudan-fMRI/CineBrain/tree/main) and would greatly appreciate your clarification:
The preprocessed EEG data has a shape of (69, 800). Since the original EEG was collected with 64 channels, are the first 64 rows EEG signals and the last 5 rows ECG signals?
Additionally, does this shape imply downsampling from 1000 Hz to 200 Hz (800 data points for 4 seconds)?
The EEG files for sub0002 and sub0003 appear identical. Could this be a data upload error?
The video data lacks corresponding audio files. Would it be possible to provide the audio data for alignment with EEG signals?
Regarding data splits: Is there a formal table mapping clips to specific episodes? We hypothesize 270 clips per episode (5400 clips/20 episodes), but confirmation is needed.
Each subject has 27000 EEG npy files, while there are only 5400 video/audio clips. What is the exact correspondence between these EEG files and the 5400 clips?
Thank you again for your contributions and for taking the time to address these questions. I look forward to your reply.
Best regards.
Thanks for your interest in our work. We address your questions point by point below:
- Yes, the first 64 channels correspond to EEG signals, and the last channel is ECG.
- During preprocessing, EEG and fMRI are temporally aligned. fMRI is sampled every 0.8 s, and EEG is segmented accordingly into 0.8 s windows, resulting in 800 data points per segment. The original EEG sampling rate remains 1000 Hz.
- After checking, we confirm that there were indeed some issues in the currently uploaded data. These have now been corrected.
- The audio files were previously uploaded to ModelScope and have not yet been synchronized to Hugging Face. They are available at: https://www.modelscope.cn/datasets/Jianxionggao/CineBrainVide.
In that repository, both videos and audio are segmented into 4 s clips. The videos are at 16.25 fps, which is slightly higher than those on Hugging Face. The choice of version depends on your specific use case.
- For each episode, we use the first 18 minutes and segment them into 4 s clips, yielding 18 × 60 / 4 = 270 clips per episode.
For videos, we select the first 10 episodes from Seasons 7, 9, and 11. Video IDs follow the order 0701, 0702, …, 0901, 0902, …, 1109, 1110 (season + episode index).
- Different subjects watched different sets of videos:
  - Subjects 1, 2, and 6 watched Seasons 7 and 9, corresponding to video IDs 1–20 (5400 clips).
  - Subjects 3, 4, and 5 watched Seasons 7 and 11, corresponding to video IDs 1–10 and 21–30 (5400 clips).
As long as the video IDs are correctly matched, EEG and fMRI data correspond one-to-one.
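The 27000-files-per-subject figure in the question appears to follow directly from the segmentation described above: each 4 s clip yields five 0.8 s windows at 1000 Hz (5400 × 5 = 27000). A minimal numpy sketch of that windowing, assuming 69 channels and a synthetic 4 s recording (the exact file layout on disk is not confirmed here):

```python
import numpy as np

FS = 1000           # original EEG sampling rate (Hz), per the reply
TR = 0.8            # fMRI sampling interval (s)
WIN = int(FS * TR)  # 800 samples per EEG segment

# Hypothetical 4 s continuous recording: 69 channels (64 EEG + the rest ECG/aux)
raw = np.random.randn(69, 4 * FS)

# Cut into TR-aligned windows: five segments of shape (69, 800) per 4 s clip
n_seg = raw.shape[1] // WIN
segments = raw[:, : n_seg * WIN].reshape(69, n_seg, WIN).transpose(1, 0, 2)

print(WIN, n_seg, segments.shape)  # 800 5 (5, 69, 800)
```

Each of the five resulting `(69, 800)` arrays matches the shape of one preprocessed EEG file.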
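The episode and subject figures above can be turned into a small lookup helper. The function name, the 0-based clip indexing, and the assumption that clips are stored in episode order are mine, not from the dataset documentation:

```python
CLIPS_PER_EPISODE = 270  # 18 min * 60 s / 4 s clips, per the reply

# Video IDs in presentation order (season + episode index), per the reply
VIDEO_IDS = [f"{s:02d}{e:02d}" for s in (7, 9, 11) for e in range(1, 11)]

# Which 1-based video IDs each subject watched, per the reply
SUBJECT_VIDEOS = {
    1: list(range(1, 21)), 2: list(range(1, 21)), 6: list(range(1, 21)),
    3: list(range(1, 11)) + list(range(21, 31)),
    4: list(range(1, 11)) + list(range(21, 31)),
    5: list(range(1, 11)) + list(range(21, 31)),
}

def clip_to_episode(subject: int, clip_idx: int):
    """Map a 0-based clip index (0..5399) to (video ID, clip index within episode)."""
    vid_pos, within = divmod(clip_idx, CLIPS_PER_EPISODE)
    video_id = VIDEO_IDS[SUBJECT_VIDEOS[subject][vid_pos] - 1]
    return video_id, within

print(clip_to_episode(1, 0))     # ('0701', 0)  -> Season 7, Episode 1
print(clip_to_episode(3, 2700))  # ('1101', 0)  -> subject 3's 11th video is Season 11, Episode 1
```

Under these assumptions, subject 1's last clip (index 5399) maps to `('0910', 269)`, i.e. the final 4 s segment of Season 9, Episode 10.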
If you have any further questions, please feel free to discuss them with us.
Best regards,
Jianxiong