RedPajama-Data-V2-100M

Dataset Description

This is a 100 million-token subset of krisbailey/RedPajama-Data-V2-1B, which is itself a subset of togethercomputer/RedPajama-Data-V2.

Motivation

100M tokens is a standard size for:

  • CI/CD Pipelines: Small enough to download and train on quickly in unit tests.
  • Debugging: Verifying training loops without waiting hours for a full run.
  • Scaling Laws: The first step in a logarithmic scaling series (100M -> 1B -> 10B).

Dataset Details

  • Total Tokens: 99,999,721
  • Source: krisbailey/RedPajama-Data-V2-1B
  • Structure: First ~10% of the randomized 1B dataset (see the sketch after this list).
  • Format: Parquet (Snappy compression), single file
  • Producer: Kris Bailey (kris@krisbailey.com)
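
Because the parent 1B dataset is pre-shuffled, a simple prefix slice is a representative sample. Below is a minimal sketch of how such a slice could be reproduced; it is illustrative, not the producer's exact recipe, and the true row cutoff would depend on how tokens were counted.

from datasets import load_dataset

# Hypothetical reconstruction: take the first ~10% of rows from the
# already-randomized 1B-token parent dataset. The exact cutoff depends
# on the tokenizer used to count tokens, so this is approximate.
parent = load_dataset("krisbailey/RedPajama-Data-V2-1B", split="train")
subset = parent.select(range(len(parent) // 10))

# Serialize to a single Parquet file (pyarrow defaults to Snappy compression).
subset.to_parquet("redpajama-v2-100m.parquet")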

Usage

from datasets import load_dataset

# Download the dataset and load the single "train" split.
ds = load_dataset("krisbailey/RedPajama-Data-V2-100M", split="train")
print(ds[0])  # inspect the first record
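
For CI jobs that only need a few batches, streaming avoids downloading the whole Parquet file up front. A minimal sketch follows; the record layout is inherited from the upstream RedPajama-V2 schema, so inspect an example before depending on specific column names.

from datasets import load_dataset

# Stream records instead of materializing the full dataset on disk,
# which is useful for smoke tests that only touch a few examples.
ds = load_dataset(
    "krisbailey/RedPajama-Data-V2-100M",
    split="train",
    streaming=True,
)

for i, example in enumerate(ds):
    print(example)  # one document as a dict of columns
    if i >= 2:      # peek at the first few records only
        break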

Citation

@article{together2023redpajama,
  title={RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  author={Together Computer},
  journal={https://github.com/togethercomputer/RedPajama-Data},
  year={2023}
}