---
task_categories:
- text-generation
language:
- es
size_categories:
- 100M<n<1B
---

**Dataset:** LLaDA-Sample-ES  
**Base corpus:** `crscardellino/spanish_billion_words`  
**Purpose:** Training data for LLaDA (Large Language Diffusion Models)

## Preprocessing
- **Tokenizer:** `GSAI-ML/LLaDA-8B-Instruct`  
- **Chunking:** Up to **4,096 tokens** per chunk (1% of chunks randomly sized between 1–4,096 tokens)  
- **Noisy masking:** Applied with noise factor ε = 1×10⁻³ (see the sketch after this list)  
- **Fields per chunk (PyTorch tensors):**  
  - `input_ids`  
  - `noisy_input_ids`  
  - `mask`  
  - `t` (time scalar)
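
The card doesn't spell out the chunking and masking code, but a minimal sketch of how one chunk record could be produced follows, assuming LLaDA's usual forward process (per-token mask probability (1 - ε)·t + ε with t ~ U(0, 1)) and a mask token id of 126336 (an assumption based on the LLaDA-8B checkpoints; `chunk_tokens` and `noisy_mask` are hypothetical helper names, not functions from the actual pipeline):

```python
import torch
from transformers import AutoTokenizer

MASK_TOKEN_ID = 126336  # assumption: the [MASK] id used by LLaDA-8B; verify for your setup
MAX_LEN = 4096

tokenizer = AutoTokenizer.from_pretrained("GSAI-ML/LLaDA-8B-Instruct")

def chunk_tokens(token_ids: list[int], max_len: int = MAX_LEN,
                 random_frac: float = 0.01) -> list[torch.Tensor]:
    """Split a token stream into chunks of up to max_len tokens.

    Roughly 1% of chunks get a random length in [1, max_len], mirroring
    the card's description; the exact sampling scheme is an assumption.
    """
    chunks, i = [], 0
    while i < len(token_ids):
        use_random = torch.rand(()).item() < random_frac
        size = torch.randint(1, max_len + 1, ()).item() if use_random else max_len
        chunks.append(torch.tensor(token_ids[i:i + size]))
        i += size
    return chunks

def noisy_mask(input_ids: torch.Tensor, eps: float = 1e-3) -> dict:
    """Build one chunk record with LLaDA-style noisy masking."""
    t = torch.rand(())                       # time scalar in [0, 1)
    p_mask = (1 - eps) * t + eps             # keeps the mask rate away from 0
    mask = torch.rand(input_ids.shape) < p_mask
    return {
        "input_ids": input_ids,
        "noisy_input_ids": input_ids.masked_fill(mask, MASK_TOKEN_ID),
        "mask": mask,
        "t": t,
    }
```

Each resulting dict matches the four fields listed above.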

## Statistics
- **Total chunks:** ~652,089
- **Shards:** 65 `.pt` files  
- **Chunks per shard:** 10,000  
- **Average shard size:** ~702–708 MB  
- **Total size:** ~46 GB
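
For a quick sanity check of a downloaded shard, here is a minimal sketch, assuming each `.pt` file deserializes to a list of chunk dicts with the fields above (the filename is hypothetical):

```python
import torch

shard = torch.load("chunks_000.pt", map_location="cpu")  # hypothetical filename

print(len(shard))                    # expected: 10,000 chunks per shard
chunk = shard[0]
print(chunk["input_ids"].shape)      # up to 4,096 tokens
print(chunk["mask"].float().mean())  # masked fraction, roughly (1 - ε)·t + ε
print(chunk["t"])                    # the sampled time scalar
```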

## Usage
This dataset is used for training in the [LLaDA-from-scratch](https://github.com/F4k3r22/LLaDA-from-scratch) GitHub repository, where you’ll find the full data pipeline and training scripts.
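
Given the ~46 GB footprint, streaming one shard at a time is usually preferable to loading everything up front. A minimal sketch, again assuming each shard is a list of chunk dicts (the local path is hypothetical):

```python
import glob

import torch
from torch.utils.data import IterableDataset

class ShardStream(IterableDataset):
    """Yield chunks one at a time, keeping a single ~700 MB shard in memory."""

    def __init__(self, pattern: str):
        self.paths = sorted(glob.glob(pattern))

    def __iter__(self):
        for path in self.paths:
            for chunk in torch.load(path, map_location="cpu"):
                yield chunk

stream = ShardStream("llada-sample-es/*.pt")  # point at your local copy
```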