---
language: ms
---

# t5-3x-super-tiny-standard-bahasa-cased

Pretrained T5 3x-super-tiny standard language model for Malay.

## Pretraining Corpus

The `t5-3x-super-tiny-standard-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks it was trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament, and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament, and translated The Pile.
4. Translated Natural QA.
5. Text similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.

The preparation steps can be reproduced from https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare

## Pretraining details

- This model was trained using the Google T5 repository, https://github.com/google-research/text-to-text-transfer-transformer, on a v3-8 TPU.
- All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initialize it directly like this:

```python
from transformers import T5Tokenizer, T5Model

model = T5Model.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')
```
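
Note that `T5Model` exposes the raw encoder-decoder stack without a language-model head, so it is mainly useful for feature extraction. Below is a minimal sketch of pulling contextual embeddings from the encoder alone; the input sentence is only illustrative:

```python
import torch

# Tokenize a Malay sentence and run only the encoder of the T5Model loaded above.
inputs = tokenizer('Saya suka makan nasi.', return_tensors = 'pt')
with torch.no_grad():
    encoder_outputs = model.encoder(input_ids = inputs['input_ids'])

# One contextual vector per subword token: (batch, sequence_length, hidden_size).
print(encoder_outputs.last_hidden_state.shape)
```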

## Example using T5ForConditionalGeneration

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-3x-super-tiny-bahasa-cased')
# The prompt means "question: who is the prime minister of malaysia?"
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))
```

The output is:

```
'Mahathir Mohamad'
```

## Supported prefix

Each prefix selects one of the pretraining tasks; a short usage sketch follows the list.

1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title generation.
4. `parafrasa: {string}`, for abstractive paraphrasing.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, for semantic similarity.