# LLM Training Calibration Database
Empirical GPU training timing measurements collected to calibrate analytical (roofline-based) LLM training time estimators. The dataset captures real step times across multiple GPU architectures, model families, and configurations, along with the ratio of measured to predicted times (the correction factor).
Roofline models tend to significantly underestimate actual training time, particularly for small models on large GPUs where memory bandwidth, kernel launch overhead, and framework costs dominate. This dataset quantifies those gaps.
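To make the correction factor concrete, here is a minimal roofline sketch: predicted step time is the slower of compute time and memory-traffic time, and the correction factor is measured over predicted. All numbers below (FLOP count, bytes moved, hardware peaks, measured time) are hypothetical illustrations, not values taken from this dataset.

```python
# Minimal roofline step-time sketch (all numbers hypothetical, not from this dataset).
def roofline_step_time(flops, bytes_moved, peak_flops, peak_bw):
    """Predicted step time: bounded by compute or memory bandwidth, whichever is slower."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Hypothetical small-model training step on an A100-class GPU.
predicted = roofline_step_time(
    flops=6e12,          # ~6 TFLOP per step
    bytes_moved=1.2e11,  # ~120 GB of HBM traffic
    peak_flops=312e12,   # 312 TFLOP/s (BF16 tensor-core peak)
    peak_bw=2.0e12,      # 2.0 TB/s HBM bandwidth
)
measured = 0.180  # seconds, hypothetical measurement
correction_factor = measured / predicted  # >1 means the roofline underestimated
print(f"predicted={predicted:.3f}s correction={correction_factor:.1f}x")
```

In this sketch the step is memory-bound, and the 3x gap between measured and predicted is the kind of correction factor the dataset records.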
## Dataset Contents
The dataset is stored as Parquet files (one per database table) plus a metadata.json file.
### Tables
| File | Description |
|---|---|
| `calibration_runs.parquet` | Individual timed training runs (step times, GPU memory, utilization) |
| `calibration_stats.parquet` | Aggregated correction factors per (host, model, batch_size, seq_len) |
| `dtype_calibration.parquet` | Speedup factors across FP32 / FP16 / BF16 / INT8 / INT4 |
| `layer_timing.parquet` | Per-layer forward and backward pass times |
| `memory_calibration.parquet` | Computed vs. measured GPU memory usage |
| `inference_overhead.parquet` | Fitted framework overhead coefficients (PyTorch, vLLM) |
| `inference_overhead_measurements.parquet` | Raw per-model data for overhead regression |
| `system_load_snapshots.parquet` | CPU/GPU/RAM load at benchmark start and end |
| `telemetry_samples.parquet` | Time-series GPU utilization, memory, power, and clock data during runs |
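As a sketch of how per-run rows roll up into aggregated correction factors, the snippet below groups synthetic run-level data by (host, model, batch_size, seq_len). The column names here are assumptions based on the table descriptions above, not the published schema; check the Parquet files for the actual columns.

```python
import pandas as pd

# Synthetic rows shaped like calibration_runs; column names are assumptions
# based on the table descriptions, not the published schema.
runs = pd.DataFrame({
    "host": ["a100-1", "a100-1", "h100-1", "h100-1"],
    "model": ["gpt2", "gpt2", "gpt2", "gpt2"],
    "batch_size": [8, 8, 8, 8],
    "seq_len": [512, 512, 512, 512],
    "measured_s": [0.20, 0.22, 0.09, 0.11],
    "predicted_s": [0.10, 0.10, 0.06, 0.06],
})
runs["correction"] = runs["measured_s"] / runs["predicted_s"]

# Aggregate per (host, model, batch_size, seq_len), in the spirit of calibration_stats.
stats = (
    runs.groupby(["host", "model", "batch_size", "seq_len"])["correction"]
        .agg(["mean", "std", "count"])
        .reset_index()
)
print(stats)
```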
## GPU Coverage
Measurements span 20+ GPU instances across 9 GPU models:
- NVIDIA H200
- NVIDIA H100 80GB HBM3 / H100 NVL
- NVIDIA A100 SXM4 40GB / 80GB, A100 80GB PCIe
- NVIDIA L40S
- NVIDIA RTX 4090
- NVIDIA A10
- NVIDIA GeForce RTX 3090
- Apple M2 Max (MPS)
## Model Coverage
| Model | Parameters |
|---|---|
| GPT-2 (small / medium / large / xl) | 124M – 1.5B |
| BERT-base-uncased | 110M |
| facebook/opt-1.3b | 1.3B |
| EleutherAI/pythia-1.4b | 1.4B |
| microsoft/phi-2 | 2.7B |
| google/gemma-2b | 2B |
## Schema Version

`schema_version: 1`
The metadata.json file in each published revision records the schema version and row counts for each table.
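A consumer can check the schema version before reading the tables. The JSON below is a hypothetical illustration of the file's shape; the exact key names (e.g. `row_counts`) are assumptions, so verify them against the published `metadata.json`.

```python
import json

# Hypothetical metadata.json contents; the real file ships with each revision,
# and its key names may differ from this illustration.
metadata = json.loads("""
{
  "schema_version": 1,
  "row_counts": {"calibration_runs": 1234, "calibration_stats": 56}
}
""")

# Refuse to proceed on an unexpected schema version before reading the tables.
assert metadata["schema_version"] == 1, "unexpected schema version"
for table, rows in metadata["row_counts"].items():
    print(f"{table}: {rows} rows")
```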
## Access
This dataset is gated. Request access; it is granted to researchers and practitioners who want to validate or extend LLM training time estimation work.
## Citation
If you use this dataset, please cite the associated project:
```bibtex
@misc{osteele2025llmcalibration,
  author = {Steele, Oliver},
  title  = {LLM Training Calibration Database},
  year   = {2025},
  url    = {https://huggingface.co/datasets/osteele/llm-calibration-db}
}
```