Improve dataset card with paper link, task category, and citation
This PR improves the dataset card by:
- Adding a link to the associated paper: [Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data](https://arxiv.org/abs/2506.00469).
- Specifying the `text-generation` task category in the metadata.
- Correcting and updating the citation to reflect the paper.
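
Since the main change is YAML metadata, a quick way to confirm it took effect after merging is to read the card back with `huggingface_hub`. This is only a sketch: the repository id below is an assumption for illustration, not something stated in this PR.

```python
# Minimal sketch for checking the updated card metadata after merge.
# The repo id "MaLA-LM/mala-monolingual-split" is a hypothetical placeholder.
from huggingface_hub import DatasetCard

card = DatasetCard.load("MaLA-LM/mala-monolingual-split")  # hypothetical repo id
print(card.data.license)          # expected: odc-by
print(card.data.task_categories)  # expected: ['text-generation']
print(card.data.tags)             # expected: ['multilingual', 'translation', 'low-resource']
```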
README.md CHANGED

@@ -1,5 +1,11 @@
 ---
 license: odc-by
+task_categories:
+- text-generation
+tags:
+- multilingual
+- translation
+- low-resource
 ---
 
 # MaLA Corpus: Massive Language Adaptation Corpus
@@ -8,7 +14,7 @@ This is a cleaned version with some necessary data cleaning.
 
 ## Dataset Summary
 
-The **MaLA Corpus** (Massive Language Adaptation) is a comprehensive, multilingual dataset designed to support the continual pre-training of large language models. It covers **939 languages** and consists of over **74 billion tokens**, making it one of the largest datasets of its kind. With a focus on improving the representation of low-resource languages, the MaLA Corpus is a critical resource for advancing multilingual models, particularly those aimed at serving underrepresented languages.
+The **MaLA Corpus** (Massive Language Adaptation) is a comprehensive, multilingual dataset designed to support the continual pre-training of large language models. It covers **939 languages** and consists of over **74 billion tokens**, making it one of the largest datasets of its kind. With a focus on improving the representation of low-resource languages, the MaLA Corpus is a critical resource for advancing multilingual models, particularly those aimed at serving underrepresented languages. This dataset supports the work presented in [Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data](https://arxiv.org/abs/2506.00469).
 
 ---
 
@@ -57,12 +63,12 @@ We will comply with legitimate requests by removing the affected sources from th
 ## Citation
 
 ```
-@article{
-title={
-author={Shaoxiong Ji
-year={
-journal={arXiv preprint
-url={https://arxiv.org/abs/
+@article{ji2025massivelymultilingualadaptation,
+  title={Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data},
+  author={Shaoxiong Ji and Zihao Li and Jaakko Paavola and Indraneil Paul and Hengyu Luo and Jörg Tiedemann},
+  year={2025},
+  journal={arXiv preprint arXiv:2506.00469},
+  url={https://arxiv.org/abs/2506.00469},
 }
 ```
 
@@ -72,3 +78,5 @@ We will comply with legitimate requests by removing the affected sources from th
 We extend our thanks to the language communities and contributors who helped source, clean, and validate the diverse data used in the MaLA Corpus. Their efforts are invaluable in supporting linguistic diversity in AI research.
 
 This work is done by researchers at [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) in collaboration with partners from TU Darmstadt, the University of Edinburgh, and LMU Munich. It is funded by [HPLT](https://hplt-project.org) and [UTTER](https://he-utter.eu).
+
+[Github](https://github.com/mala-lm/emma-500) | [Project Page](https://mala-lm.github.io/emma-500-gen2)
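
The Dataset Summary above positions the corpus for continual pre-training under the new `text-generation` task category, so a loading sketch may be useful alongside the metadata change. The repo id, config name, and `text` column below are assumptions for illustration only; they are not specified in this PR.

```python
# Minimal sketch of streaming the corpus for continual pre-training.
# Hypothetical names: repo id "MaLA-LM/mala-monolingual-split",
# per-language config "eng_Latn", and a "text" column.
from datasets import load_dataset

stream = load_dataset(
    "MaLA-LM/mala-monolingual-split",  # hypothetical repo id
    "eng_Latn",                        # hypothetical per-language config
    split="train",
    streaming=True,                    # avoid downloading 74B+ tokens up front
)

for example in stream.take(3):
    print(example["text"][:200])       # assumed text column name
```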