---
task_categories:
- text-generation
language:
- es
size_categories:
- 100M<n<1B
---
# LLaDA-Sample-ES

- **Base:** `crscardellino/spanish_billion_words`
- **Purpose:** training data for LLaDA (Large Language Diffusion Models)
## Preprocessing
- Tokenizer: `GSAI-ML/LLaDA-8B-Instruct`
- Chunking: up to 4,096 tokens per chunk (1% of chunks randomly sized between 1 and 4,096 tokens)
- Noisy masking: applied with noise factor ε = 1×10⁻³
- Fields per chunk (PyTorch tensors): `input_ids`, `noisy_input_ids`, `mask`, `t` (time scalar); a sketch of how such a record could be built follows this list
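
The actual preprocessing code lives in the LLaDA-from-scratch repository; the snippet below is only a minimal sketch of how one chunk record with these fields could be produced. The mask token id and the use of ε as a lower bound on the masking ratio `t` are assumptions, not facts from this card.

```python
# Hypothetical sketch, not the repository's pipeline. Field names follow the card
# (input_ids, noisy_input_ids, mask, t); MASK_TOKEN_ID and the role of EPS as a
# lower bound on t are assumptions.
import torch
from transformers import AutoTokenizer

MASK_TOKEN_ID = 126336  # assumed [MASK] id for GSAI-ML/LLaDA-8B-Instruct
EPS = 1e-3              # noise factor from the card

tokenizer = AutoTokenizer.from_pretrained(
    "GSAI-ML/LLaDA-8B-Instruct", trust_remote_code=True
)

def make_chunk(text: str, max_len: int = 4096) -> dict:
    """Tokenize one text span and apply LLaDA-style random masking."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0][:max_len]
    # Sample a masking ratio t in [EPS, 1) so every chunk receives some noise.
    t = (1.0 - EPS) * torch.rand(1) + EPS
    mask = torch.rand(ids.shape) < t  # True where a token gets replaced by [MASK]
    noisy_ids = torch.where(mask, torch.full_like(ids, MASK_TOKEN_ID), ids)
    return {"input_ids": ids, "noisy_input_ids": noisy_ids, "mask": mask, "t": t}
```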
## Statistics
- Total chunks: ~652,089
- Shards: 65 `.pt` files (see the loading sketch below)
- Chunks per file: 10,000
- Average file size: ~702–708 MB
- Total size: ~46 GB
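
A single shard can be inspected directly with `torch.load`. The file name and the assumption that each `.pt` file stores a list of chunk dictionaries are illustrative only; check the repository for the actual layout.

```python
# Minimal inspection sketch; the shard file name and the list-of-dicts layout
# are assumptions about how the ~10,000 chunks per file are stored.
import torch

chunks = torch.load("shard_000.pt")  # hypothetical shard name
print(len(chunks))                   # expected: ~10,000 chunks
sample = chunks[0]
print(sample["input_ids"].shape, sample["mask"].sum(), sample["t"])
```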
## Usage
This dataset is used for training in the LLaDA-from-scratch GitHub repository, where you’ll find the full data pipeline and training scripts.
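
As a rough illustration, the shards could be wrapped in a plain PyTorch `Dataset`. This is a sketch under the same list-of-dicts layout assumption as above, not the repository's training code.

```python
# Sketch only: wraps the .pt shards in a torch Dataset, assuming each file holds
# a list of chunk dicts. At ~46 GB total, a real pipeline would load shards
# lazily rather than keeping everything in memory.
import glob
import torch
from torch.utils.data import Dataset, DataLoader

class LLaDASampleES(Dataset):
    def __init__(self, pattern: str = "data/*.pt"):
        self.chunks = []
        for path in sorted(glob.glob(pattern)):
            self.chunks.extend(torch.load(path))

    def __len__(self) -> int:
        return len(self.chunks)

    def __getitem__(self, idx: int) -> dict:
        return self.chunks[idx]

if __name__ == "__main__":
    # batch_size=1 avoids collation issues with the 1% of chunks that have
    # random lengths; larger batches would need padding or a custom collate_fn.
    loader = DataLoader(LLaDASampleES(), batch_size=1, shuffle=True)
    batch = next(iter(loader))
    print(batch["noisy_input_ids"].shape, batch["t"])
```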