

RealPDEBench

Links: HF Dataset · arXiv · Website & Docs · Codebase · License: CC BY-NC 4.0

RealPDEBench is a benchmark of paired real-world measurements and matched numerical simulations for complex physical systems. It is designed for spatiotemporal forecasting and sim-to-real transfer evaluation on real data.

This Hub repository (AI4Science-WestlakeU/RealPDEBench) is the release repo for RealPDEBench.

Figure 1. RealPDEBench provides paired real-world measurements and matched numerical simulations for sim-to-real evaluation.

What makes RealPDEBench different?

  • Paired real + simulated data: each scenario provides experimental measurements and corresponding CFD/LES simulations.
  • Real-world evaluation: models are evaluated on real trajectories to quantify the sim-to-real gap.
  • Multi-modal mismatch: simulations include additional unmeasured modalities (e.g., pressure, species fields), enabling modality-masking and transfer strategies.
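
One simple transfer baseline this setup enables is masking the simulation-only channels so that simulated inputs match what the real sensor observes. A minimal NumPy sketch; the channel layout and function name are illustrative, not part of the benchmark's API:

```python
import numpy as np

def mask_sim_only_channels(sim_batch: np.ndarray, measured: list) -> np.ndarray:
    """Zero out channels that the real experiment does not measure.

    sim_batch: array of shape (T, H, W, C) with all simulated channels.
    measured:  indices of channels the real sensor also observes,
               e.g. [0, 1] for (u, v) when pressure is simulation-only.
    """
    masked = np.zeros_like(sim_batch)
    masked[..., measured] = sim_batch[..., measured]
    return masked

# Example: a (u, v, p) simulation, but PIV only measures (u, v).
sim = np.random.rand(4, 8, 8, 3).astype(np.float32)
real_like = mask_sim_only_channels(sim, measured=[0, 1])
assert np.array_equal(real_like[..., :2], sim[..., :2])
assert np.all(real_like[..., 2] == 0)
```

Zeroing (rather than dropping) the unmeasured channels keeps tensor shapes identical across simulated and real inputs, so the same model can consume both.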

Data sources (high level)

  • Fluid systems (cylinder, controlled_cylinder, fsi, foil):
    • Real: Particle Image Velocimetry (PIV) in a circulating water tunnel
    • Sim: CFD (2D finite-volume + immersed-boundary; 3D GPU solvers depending on scenario)
  • Combustion (combustion):
    • Real: OH* chemiluminescence imaging (high-speed)
    • Sim: Large Eddy Simulation (LES) with detailed chemistry (NH3/CH4/air co-firing)

Scenarios (5)

| Scenario | Real data (measured) | Numerical data (simulated) | Frames / trajectory | Spatial grid (after sub-sampling) | HDF5 trajectories (real / numerical) |
|---|---|---|---|---|---|
| cylinder | velocity (u, v) | (u, v, p) | 3990 | 64×128 | 92 / 92 |
| controlled_cylinder | (u, v) | (u, v, p) + control params in filenames | 3990 | 64×128 | 96 / 96 |
| fsi | (u, v) | (u, v, p) | 2173 | 64×64 | 51 / 51 |
| foil | (u, v) | (u, v, p) | 3990 | 64×128 | 98 / 99 |
| combustion | OH* chemiluminescence intensity (1 channel) | intensity surrogate (1) + 15 simulated fields | 2001 | 128×128 | 30 / 30 |

Total trajectories (HDF5 files): ~735 (≈367 real + ≈368 numerical).

Physical parameter ranges (real experiments)

| Scenario | Key parameters (real) |
|---|---|
| cylinder | Reynolds number Re: 1800–12000 |
| controlled_cylinder | Re: 1781–9843; control frequency f: 0.5–1.4 Hz |
| fsi | Re: 3272–9068; mass ratio m*: 18.2–20.8 |
| foil | angle of attack α: 0°–20°; Re: 2968–17031 |
| combustion | CH4 ratio: 20–100%; equivalence ratio φ: 0.75–1.3 |
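
For controlled_cylinder and combustion, the physical parameters are encoded in the HDF5 file names (e.g. 8062_1.1.h5, 40NH3_1.1.h5). A sketch of parsing them, assuming the {Re}_{f}.h5 and {NH3%}NH3_{phi}.h5 conventions suggested by the file listings; the returned field names are illustrative:

```python
import re

def parse_controlled_cylinder(name: str) -> dict:
    """Parse names like '8062_1.1.h5' into Reynolds number and control frequency.

    Assumes the '{Re}_{f}.h5' convention seen in the file listing.
    """
    m = re.fullmatch(r"(\d+)_([\d.]+)\.h5", name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    return {"Re": int(m.group(1)), "f_hz": float(m.group(2))}

def parse_combustion(name: str) -> dict:
    """Parse names like '40NH3_1.1.h5' into NH3 blend (%) and equivalence ratio."""
    m = re.fullmatch(r"(\d+)NH3_([\d.]+)\.h5", name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    return {"nh3_pct": int(m.group(1)), "phi": float(m.group(2))}

print(parse_controlled_cylinder("8062_1.1.h5"))  # {'Re': 8062, 'f_hz': 1.1}
print(parse_combustion("40NH3_1.1.h5"))          # {'nh3_pct': 40, 'phi': 1.1}
```

Note that the combustion file names encode the NH3 fraction, which is consistent with the CH4 ratio range above (0NH3 → 100% CH4, 80NH3 → 20% CH4).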

Data format on the Hub

RealPDEBench stores complete trajectories in Hugging Face Arrow format, with separate JSON index files for the train/val/test splits. Because full trajectories are stored, N_autoregressive can be chosen dynamically at runtime.

Each scenario contains:

  • Trajectory data: hf_dataset/{real,numerical}/ — Arrow files with complete time series
  • Index files: hf_dataset/{split}_index_{type}.json — maps sample indices to (sim_id, time_id)
  • test_mode metadata: {in_dist,out_dist,remain}_params_{type}.json

Repository layout:

{repo_root}/
  cylinder/
    in_dist_test_params_real.json
    out_dist_test_params_real.json
    remain_params_real.json
    in_dist_test_params_numerical.json
    out_dist_test_params_numerical.json
    remain_params_numerical.json
    hf_dataset/
      real/                           # Arrow: complete trajectories (92 files)
        data-*.arrow
        dataset_info.json
        state.json
      numerical/                      # Arrow: complete trajectories
        data-*.arrow
        dataset_info.json
        state.json
      train_index_real.json           # Index: [{"sim_id": "xxx.h5", "time_id": 0}, ...]
      val_index_real.json
      test_index_real.json
      train_index_numerical.json
      val_index_numerical.json
      test_index_numerical.json
  fsi/
    ...  (same structure)
  controlled_cylinder/
    ...  (same structure)
  foil/
    ...  (same structure)
  combustion/
    ...  (same structure)

How to download only what you need

For large data, use snapshot_download(..., allow_patterns=...) to avoid pulling the full repository.

import os
from huggingface_hub import snapshot_download
from datasets import load_from_disk

repo_id = "AI4Science-WestlakeU/RealPDEBench"
# Optional: disable Xet-backed transfers if they are blocked on your network
os.environ["HF_HUB_DISABLE_XET"] = "1"
local_dir = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset",
    allow_patterns=["fsi/**"],  # example: download only the FSI folder
    # endpoint="https://hf-mirror.com",  # optional: route through a mirror
)

# Load trajectory data
trajectories = load_from_disk(os.path.join(local_dir, "fsi", "hf_dataset", "real"))
print(f"Loaded {len(trajectories)} trajectories")
print(trajectories[0].keys())  # sim_id, u, v, shape_t, shape_h, shape_w

Using the RealPDEBench loaders (recommended)

For automatic train/val/test splitting and dynamic N_autoregressive support, use the provided dataset loaders:

from realpdebench.data.fluid_hf_dataset import FSIHFDataset

dataset = FSIHFDataset(
    dataset_name="fsi",
    dataset_root="/path/to/data",
    dataset_type="real",
    mode="test",
    N_autoregressive=10,  # Dynamic! Works with any value
)

input_tensor, output_tensor = dataset[0]
print(f"Input shape: {input_tensor.shape}")   # (20, H, W, 2)
print(f"Output shape: {output_tensor.shape}") # (200, H, W, 2) = 20 × 10

Schema (columns)

Fluid datasets (cylinder, controlled_cylinder, fsi, foil)

  • Keys (each row = one complete trajectory):
    • sim_id (string): trajectory file name (e.g., 10031.h5)
    • u, v (bytes): float32 arrays of shape (T_full, H, W) containing the complete time series
    • p (bytes): float32 array of shape (T_full, H, W) (numerical splits only)
    • shape_t (int): complete trajectory length (e.g., 3990, 2173)
    • shape_h, shape_w (int): spatial dimensions

Combustion dataset (combustion)

  • Keys (each row = one complete trajectory):
    • sim_id (string): e.g., 40NH3_1.1.h5
    • observed (bytes): float32 array of shape (T_full, H, W) containing the complete time series
    • numerical (bytes): float32 array (T_full, H, W, 15) (numerical splits only)
    • numerical_channels (int): number of numerical channels (15)
    • shape_t (int): complete trajectory length (e.g., 2001)
    • shape_h, shape_w (int): spatial dimensions
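
The bytes columns above must be decoded before use. A minimal sketch, assuming the raw bytes are little-endian float32 values in C (row-major) order; this layout is an assumption, so check dataset_info.json in the release if decoding fails:

```python
import numpy as np

def decode_field(raw: bytes, shape: tuple) -> np.ndarray:
    """Decode a serialized field column into a float32 array.

    Assumes little-endian float32 in C (row-major) order; this is an
    assumption about the serialization, not confirmed by the card.
    """
    return np.frombuffer(raw, dtype=np.float32).reshape(shape)

# Round-trip check with synthetic data standing in for a real row.
field = np.random.rand(5, 4, 6).astype(np.float32)
row = {"u": field.tobytes(), "shape_t": 5, "shape_h": 4, "shape_w": 6}
u = decode_field(row["u"], (row["shape_t"], row["shape_h"], row["shape_w"]))
assert u.shape == (5, 4, 6)
```

The shape_t/shape_h/shape_w columns exist precisely so the flat byte buffer can be reshaped without reading any metadata elsewhere.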

Index files (JSON)

Each split has an index file mapping sample indices to trajectory positions:

[
  {"sim_id": "10031.h5", "time_id": 0},
  {"sim_id": "10031.h5", "time_id": 20},
  {"sim_id": "10031.h5", "time_id": 40},
  ...
]
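
Each index entry points at one window inside a full trajectory. A minimal sketch of the lookup, with a toy in-memory dict standing in for the Arrow dataset; the window length and helper name are illustrative:

```python
import json

def build_lookup(index_entries, trajectories):
    """Map each index entry to a (trajectory, start frame) pair.

    index_entries: list of {"sim_id": ..., "time_id": ...} dicts,
                   as stored in {split}_index_{type}.json.
    trajectories:  dict sim_id -> full time series (a stand-in here
                   for the Arrow dataset keyed by sim_id).
    """
    samples = []
    for entry in index_entries:
        traj = trajectories[entry["sim_id"]]
        samples.append((traj, entry["time_id"]))
    return samples

index_json = '[{"sim_id": "10031.h5", "time_id": 0}, {"sim_id": "10031.h5", "time_id": 20}]'
trajs = {"10031.h5": list(range(100))}  # toy trajectory of 100 "frames"
samples = build_lookup(json.loads(index_json), trajs)
traj, start = samples[1]
window = traj[start:start + 20]  # e.g. a 20-frame input window
print(window[0])  # 20
```

Because the index stores only (sim_id, time_id) offsets, regenerating samples for a different window or rollout length only requires rebuilding the lightweight JSON index, not the trajectory data.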

Data size

  • Total: ~210GB across all scenarios
  • Largest shard file: ~0.5GB (well below the Hub's recommended <50GB per file)
  • Total file count: ~550 files (well below the Hub's recommended <100k files per repo)

Per-scenario totals:

| Scenario | real | numerical | Total |
|---|---|---|---|
| cylinder | 23GB | 34GB | 57GB |
| controlled_cylinder | 24GB | 36GB | 59GB |
| fsi | 6GB | 11GB | 17GB |
| foil | 24GB | 37GB | 61GB |
| combustion | 1GB | 15GB | 16GB |
| Total | 78GB | 133GB | ~210GB |

Recommended benchmark protocols

RealPDEBench supports three standard training paradigms (all evaluated on real-world data):

  • Simulated training (numerical only)
  • Real-world training (real only)
  • Simulated pretraining + real finetuning
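
The three paradigms differ only in which data type feeds each stage; evaluation is always on real data. A minimal sketch of the stage selection; the names are illustrative, not the benchmark's configuration format:

```python
# Each protocol picks a dataset type per stage; evaluation is always real.
PROTOCOLS = {
    "simulated_training": {"pretrain": None, "train": "numerical", "eval": "real"},
    "real_training": {"pretrain": None, "train": "real", "eval": "real"},
    "sim_pretrain_real_finetune": {"pretrain": "numerical", "train": "real", "eval": "real"},
}

def stages(protocol: str):
    """Yield (stage, dataset_type) pairs for a protocol, skipping unused stages."""
    cfg = PROTOCOLS[protocol]
    for stage in ("pretrain", "train", "eval"):
        if cfg[stage] is not None:
            yield stage, cfg[stage]

for stage, dtype in stages("sim_pretrain_real_finetune"):
    print(stage, dtype)
# pretrain numerical
# train real
# eval real
```

In all three cases the dataset_type argument of the loaders above ("numerical" or "real") is the only thing that changes between stages.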

License

This dataset is released under CC BY‑NC 4.0 (non‑commercial). Please credit the authors and the benchmark paper when using the dataset.

Citation

If you find our work and/or our code useful, please cite us via:

@misc{hu2026realpdebenchbenchmarkcomplexphysical,
      title={RealPDEBench: A Benchmark for Complex Physical Systems with Real-World Data}, 
      author={Peiyan Hu and Haodong Feng and Hongyuan Liu and Tongtong Yan and Wenhao Deng and Tianrun Gao and Rong Zheng and Haoren Zheng and Chenglei Yu and Chuanrui Wang and Kaiwen Li and Zhi-Ming Ma and Dezhi Zhou and Xingcai Lu and Dixia Fan and Tailin Wu},
      year={2026},
      eprint={2601.01829},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2601.01829}, 
}

Contact

AI for Scientific Simulation and Discovery Lab, Westlake University
Maintainer: westlake-ai4s (Hugging Face)
Org: AI4Science-WestlakeU
