---
task_categories:
  - text-generation
language:
  - es
size_categories:
  - 100M<n<1B
---

# LLaDA-Sample-ES

- **Base:** crscardellino/spanish_billion_words
- **Purpose:** training LLaDA (Large Language Diffusion Models)

## Preprocessing

- **Tokenizer:** GSAI-ML/LLaDA-8B-Instruct
- **Chunking:** up to 4,096 tokens per chunk; 1% of chunks are randomly sized between 1 and 4,096 tokens
- **Noisy masking:** applied with noise factor ε = 1×10⁻³ (see the sketch after this list)
- **Fields per chunk** (PyTorch tensors):
  - `input_ids`
  - `noisy_input_ids`
  - `mask`
  - `t` (time scalar)
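
As a mental model of how these fields relate, here is a minimal PyTorch sketch. It is a hypothetical reconstruction, not this repository's actual code: it assumes ε enters as a floor on the per-token masking probability, `p = (1 − ε)·t + ε` (the formulation used in LLaDA's reference pretraining snippet), and that the tokenizer's `[MASK]` id is 126336; verify both against the real pipeline.

```python
import torch

MASK_TOKEN_ID = 126336  # assumed [MASK] id for the LLaDA-8B tokenizer

def noisy_mask(input_ids: torch.Tensor, eps: float = 1e-3):
    """Corrupt one chunk LLaDA-style (hypothetical reconstruction).

    Samples a time scalar t ~ U(0, 1) and masks each token independently
    with probability p = (1 - eps) * t + eps, so the noise factor eps
    keeps the masking rate strictly positive even when t is near 0.
    """
    t = torch.rand(())                           # time scalar in [0, 1)
    p_mask = (1.0 - eps) * t + eps               # per-token masking probability
    mask = torch.rand(input_ids.shape) < p_mask  # True where a token is masked
    noisy_input_ids = torch.where(
        mask, torch.full_like(input_ids, MASK_TOKEN_ID), input_ids
    )
    return noisy_input_ids, mask, t
```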

## Statistics

- Total chunks: ~652,089
- Shards: 65 `.pt` files
- Chunks per file: 10,000
- Average file size: ~702–708 MB
- Total size: ~46 GB
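
A quick way to sanity-check one shard against these numbers; the filename and the list-of-dicts layout are assumptions, so check the repository's pipeline for the real format:

```python
import torch

# Assumed shard name and layout: a list of ~10,000 chunk dicts per file.
shard = torch.load("llada_sample_es_000.pt")

print(len(shard))                      # expected: 10,000 chunks
chunk = shard[0]
print(chunk["input_ids"].shape)        # up to 4,096 tokens
print(chunk["noisy_input_ids"].shape)  # same shape, with masked positions
print(chunk["mask"].shape)             # which positions were corrupted
print(chunk["t"])                      # time scalar used for this chunk
```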

## Usage

This dataset is used for training in the LLaDA-from-scratch GitHub repository, where you’ll find the full data pipeline and training scripts.
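
For orientation only, a minimal loader might look like the sketch below. It assumes each `.pt` shard holds a list of chunk dicts with the fields listed above, and it loads everything eagerly; at ~46 GB you would stream shards instead in practice, and the repository's own pipeline should be preferred.

```python
import glob

import torch
from torch.utils.data import DataLoader, Dataset

class LLaDASampleES(Dataset):
    """Sketch of a dataset over the .pt shards (shard layout assumed)."""

    def __init__(self, pattern: str = "*.pt"):
        self.chunks = []
        for path in sorted(glob.glob(pattern)):
            # Eager loading for simplicity; stream shards for the full 46 GB.
            self.chunks.extend(torch.load(path))

    def __len__(self):
        return len(self.chunks)

    def __getitem__(self, idx):
        c = self.chunks[idx]
        return c["input_ids"], c["noisy_input_ids"], c["mask"], c["t"]

# Default collation stacks tensors, so the 1% of variable-length chunks
# would need padding or a custom collate_fn before batching.
loader = DataLoader(LLaDASampleES(), batch_size=8, shuffle=True)
```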