---
license: odc-by
---

# MaLA Corpus: Massive Language Adaptation Corpus

This is a cleaned version of the corpus; the pre-processing steps applied are described under Dataset Creation below.

## Dataset Summary

The **MaLA Corpus** (Massive Language Adaptation) is a comprehensive multilingual dataset designed to support the continual pre-training of large language models. It covers **939 languages** and consists of over **74 billion tokens**, making it one of the largest datasets of its kind. With a focus on improving the representation of low-resource languages, the MaLA Corpus is a critical resource for advancing multilingual models, particularly those aimed at serving underrepresented languages.

---

## Key Features

- **Language Coverage**: Includes data for **939 languages**, with **546 languages** having over 100,000 tokens.
- **Pre-processing**: The corpus is cleaned and deduplicated to ensure high-quality training data.

---

## Dataset Structure

The MaLA Corpus is structured to accommodate a wide variety of data types and tasks:

- **Languages**: The dataset spans **939 languages**. The top 546 languages have over 100,000 tokens each, with the remaining 393 languages contributing smaller but still valuable amounts of data.
- **Tokens**: More than **74 billion tokens** in total, making the corpus suitable for training large multilingual models (a loading sketch follows this list).
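
The per-language subsets can be pulled directly from the Hugging Face Hub. The sketch below is a minimal example using the `datasets` library; the repository id (`MaLA-LM/mala-corpus`), the subset name, and the `text` field are assumptions used for illustration, so check the dataset page for the actual identifiers and schema.

```python
# Minimal sketch: stream one language subset without downloading everything.
# NOTE: the repo id, subset name ("eng"), and the "text" field are assumed
# for illustration; consult the dataset page for the real identifiers.
from datasets import load_dataset

ds = load_dataset("MaLA-LM/mala-corpus", "eng", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example["text"][:80])
    if i == 2:  # just peek at the first few records
        break
```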

---

## Dataset Creation

The MaLA Corpus was created by aggregating data from a variety of sources, followed by rigorous pre-processing to ensure data quality:

- **Cleaning**: Noisy and irrelevant data was removed.
- **Deduplication**: Duplicate entries across multiple sources were eliminated (see the sketch after this list).
- **Normalization**: The data was normalized, and language codes were standardized to ISO 639-3 to ensure consistency across all sources.
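
As a rough illustration of the deduplication and code-normalization steps, the sketch below drops exact-duplicate documents by content hash and maps two-letter codes to ISO 639-3. It is not the actual MaLA pipeline, and the code table shown is a tiny hypothetical excerpt.

```python
# Illustrative only: exact-match deduplication plus language-code
# normalization. This is NOT the exact MaLA pipeline, just the general idea.
import hashlib

# Hypothetical excerpt of a mapping from two-letter codes to ISO 639-3.
ISO_639_3 = {"en": "eng", "de": "deu", "sw": "swa"}

def normalize_lang(code: str) -> str:
    """Map a language code to ISO 639-3, leaving unknown codes unchanged."""
    return ISO_639_3.get(code, code)

def deduplicate(docs):
    """Yield documents whose exact text has not been seen before."""
    seen = set()
    for doc in docs:
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield {**doc, "lang": normalize_lang(doc["lang"])}

docs = [
    {"text": "Habari ya dunia", "lang": "sw"},
    {"text": "Habari ya dunia", "lang": "sw"},  # exact duplicate, dropped
]
print(list(deduplicate(docs)))  # one document, lang normalized to "swa"
```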

---

## Intended Use

The MaLA Corpus is intended for researchers and developers looking to improve the multilingual capabilities of language models. It is especially useful for:

- **Continual pre-training** of large language models, such as Llama or XLM-R, to enhance their performance on low-resource languages (a data-mixing sketch follows this list).
- **Multilingual tasks** such as machine translation, open-ended generation, and commonsense reasoning.
- **Training and fine-tuning models** on multilingual benchmarks to improve language coverage across a variety of domains.
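
For continual pre-training, a common pattern is to stream several language subsets and interleave them with mixing probabilities that upsample low-resource languages. The sketch below uses `datasets.interleave_datasets` under the same assumptions as the earlier example (hypothetical repo id, subset names, and `text` field).

```python
# Sketch: build a multilingual pre-training stream by interleaving subsets.
# Repo id, subset names, and the "text" field are assumptions; adapt them
# to the actual dataset layout on the Hub.
from datasets import load_dataset, interleave_datasets

langs = ["eng", "swa", "quy"]  # hypothetical ISO 639-3 subset names
streams = [
    load_dataset("MaLA-LM/mala-corpus", lang, split="train", streaming=True)
    for lang in langs
]

# Upsample the lower-resource languages with explicit mixing probabilities.
mixed = interleave_datasets(streams, probabilities=[0.5, 0.3, 0.2], seed=42)

for i, example in enumerate(mixed):
    text = example["text"]  # feed this to your tokenizer / training loop
    if i >= 4:
        break
```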

---

## Acknowledgements

We extend our thanks to the language communities and contributors who helped source, clean, and validate the diverse data used in the MaLA Corpus. Their efforts are invaluable in supporting linguistic diversity in AI research.

This work was created by researchers at [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) in collaboration with partners from TU Darmstadt, the University of Edinburgh, and LMU Munich. It is funded by [HPLT](https://hplt-project.org) and [UTTER](https://he-utter.eu).