# SyMuRBench Datasets and Precomputed Features
This repository contains datasets and precomputed features for SyMuRBench, a benchmark for symbolic music understanding models. It includes metadata and MIDI files for multiple classification and retrieval tasks, along with pre-extracted music21 and jSymbolic features.
You can install and use the full pipeline via: https://github.com/Mintas/SyMuRBench
## Overview
SyMuRBench supports evaluation across diverse symbolic music tasks, including composer, genre, emotion, and instrument classification, as well as score-performance retrieval. This Hugging Face dataset provides:
- Dataset metadata (CSV files)
- MIDI files organized by task
- Precomputed music21 and jSymbolic features
- Configuration-ready structure for immediate use in benchmarking
## Tasks Description
| Task Name | Source Dataset | Task Type | # of Classes | # of Files | Default Metrics |
|---|---|---|---|---|---|
| ComposerClassificationASAP | ASAP | Multiclass Classification | 7 | 197 | weighted f1 score, balanced accuracy |
| GenreClassificationMMD | MetaMIDI | Multiclass Classification | 7 | 2,795 | weighted f1 score, balanced accuracy |
| GenreClassificationWMTX | WikiMT-X | Multiclass Classification | 8 | 985 | weighted f1 score, balanced accuracy |
| EmotionClassificationEMOPIA | Emopia | Multiclass Classification | 4 | 191 | weighted f1 score, balanced accuracy |
| EmotionClassificationMIREX | MIREX | Multiclass Classification | 5 | 163 | weighted f1 score, balanced accuracy |
| InstrumentDetectionMMD | MetaMIDI | Multilabel Classification | 128 | 4,675 | weighted f1 score |
| ScorePerformanceRetrievalASAP | ASAP | Retrieval | - | 438 (219 pairs) | R@1, R@5, R@10, Median Rank |
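For the classification tasks, the default metrics map directly onto scikit-learn. The snippet below is a minimal sketch, assuming scikit-learn is installed; `y_true` and `y_pred` are placeholder label arrays, not part of this dataset.

```python
# Minimal sketch: computing the default classification metrics with scikit-learn.
# y_true and y_pred are placeholder label arrays, not shipped with this dataset.
from sklearn.metrics import balanced_accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1]   # ground-truth class labels
y_pred = [0, 1, 2, 1, 1]   # model predictions

weighted_f1 = f1_score(y_true, y_pred, average="weighted")
bal_acc = balanced_accuracy_score(y_true, y_pred)
print(f"weighted F1: {weighted_f1:.3f}, balanced accuracy: {bal_acc:.3f}")
```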
## Precomputed Features
Precomputed features are available in the data/features/ folder:
- `music21_full_dataset.parquet`
- `jsymbolic_full_dataset.parquet`
Each file contains a unified table with the following columns (a loading sketch follows the example below):
- `midi_file`: Filename of the MIDI file
- `task`: Corresponding task name
- `E_0` to `E_N`: Feature vector
### Example
| midi_file | task | E_0 | E_1 | ... | E_672 | E_673 |
|---|---|---|---|---|---|---|
| Q1_0vLPYiPN7qY_1.mid | EmotionClassificationEMOPIA | 0.0 | 0.0 | ... | 0.0 | 0.0 |
| Q1_4dXC1cC7crw_0.mid | EmotionClassificationEMOPIA | 0.0 | 0.0 | ... | 0.0 | 0.0 |
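A minimal sketch for loading the precomputed features with pandas: the file path and column names follow the description above, and the task label is one of the task names from the Tasks Description table.

```python
# Minimal sketch: load precomputed jSymbolic features and select one task.
# Assumes the parquet files have been downloaded to ./data/features/.
import pandas as pd

df = pd.read_parquet("data/features/jsymbolic_full_dataset.parquet")

# Keep only the rows belonging to one benchmark task.
emopia = df[df["task"] == "EmotionClassificationEMOPIA"]

# Feature columns are named E_0 ... E_N.
feature_cols = [c for c in emopia.columns if c.startswith("E_")]
X = emopia[feature_cols].to_numpy()
print(X.shape, emopia["midi_file"].head().tolist())
```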
## File Structure
The dataset is distributed as a ZIP archive:
`data/datasets.zip`
After extraction, the structure is:
```
datasets/
├── composer_and_retrieval_datasets/
│   ├── metadata_composer_dataset.csv
│   ├── metadata_retrieval_dataset.csv
│   └── ... (MIDI files organized in subfolders)
├── genre_dataset/
│   ├── metadata_genre_dataset.csv
│   └── midis/
├── wikimtx_dataset/
│   ├── metadata_wikimtx_dataset.csv
│   └── midis/
├── emopia_dataset/
│   ├── metadata_emopia_dataset.csv
│   └── midis/
├── mirex_dataset/
│   ├── metadata_mirex_dataset.csv
│   └── midis/
└── instrument_dataset/
    ├── metadata_instrument_dataset.csv
    └── midis/
```
- CSV files: contain `filename` and `label` (or pair info for retrieval); a loading sketch follows below.
- MIDI files: used as input for feature extractors.
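A minimal sketch for reading one metadata CSV and resolving its MIDI paths, assuming the `filename`/`label` columns described above and the `genre_dataset` layout from the tree (other subsets follow the same pattern):

```python
# Minimal sketch: read one metadata CSV and resolve paths to its MIDI files.
# Column names (filename, label) follow the description above; verify them
# against the actual CSV headers after extraction.
from pathlib import Path

import pandas as pd

root = Path("datasets/genre_dataset")
meta = pd.read_csv(root / "metadata_genre_dataset.csv")

midi_paths = [root / "midis" / name for name in meta["filename"]]
print(meta["label"].value_counts())
print(midi_paths[0])
```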
## How to Use
You can download and extract everything using the built-in utility:
```python
from symurbench.utils import load_datasets

load_datasets(output_folder="./data", load_features=True)
```
This will:
- Download datasets.zip and extract it
- Optionally download precomputed features
- Update config paths automatically
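If you prefer to handle the files yourself, the archive can also be extracted with the standard library alone; the sketch below assumes you have already downloaded `data/datasets.zip` from this repository into a local `data/` folder.

```python
# Minimal sketch: extract a manually downloaded copy of the archive.
# The paths are assumptions; adjust them to wherever you saved the file.
import zipfile

with zipfile.ZipFile("data/datasets.zip") as zf:
    zf.extractall("data")  # produces the datasets/ tree shown above
```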
## License
This dataset is released under the MIT License.
## Citation
If you use SyMuRBench in your work, please cite:
```bibtex
@inproceedings{symurbench2025,
  author    = {Petr Strepetov and Dmitrii Kovalev},
  title     = {SyMuRBench: Benchmark for Symbolic Music Representations},
  booktitle = {Proceedings of the 3rd International Workshop on Multimedia Content Generation and Evaluation: New Methods and Practice (McGE '25)},
  year      = {2025},
  pages     = {9},
  publisher = {ACM},
  address   = {Dublin, Ireland},
  doi       = {10.1145/3746278.3759392}
}
```