Update README.md
README.md (CHANGED)
@@ -29,6 +29,7 @@ SambaLingo-Slovenian-Base is a pretrained Bi-lingual Slovenian and English model
 - **Language(s):** Slovenian, English
 - **Finetuned from model:** [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf)
 - **Try the chat version of this model**: [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
+- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 - **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
 
 ## Getting Started
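The hunk above only lists model-card metadata, but its context ends at the "## Getting Started" heading, whose body the diff does not show. As a reader aid, here is a minimal loading sketch, assuming the Hugging Face repo id `sambanovasystems/SambaLingo-Slovenian-Base` (inferred from the org in the chat-space link, not shown in this diff) and standard `transformers` text-completion usage:

```python
# Minimal usage sketch (not from the diff): load the base model for plain
# text completion. The repo id is assumed; adjust if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sambanovasystems/SambaLingo-Slovenian-Base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Base (non-chat) checkpoints complete text; they are not instruction-tuned,
# so the prompt is a plain prefix ("Ljubljana is the capital..." in Slovenian).
inputs = tokenizer("Ljubljana je glavno mesto", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```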
@@ -54,16 +55,7 @@ All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset.
 We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
 
 ## Evaluation
-
-|                              | SambaLingo-Slovenian-Base | sl-gpt2 | bloom-7b1 | xglm-7.5B | mGPT-13B |
-|------------------------------|---------------------------|---------|-----------|-----------|----------|
-| Perplexity (Lower Is Better) | **1.678**                 | -       | 3.261     | 4.201     | 3.428    |
-| FLORES en->sl (8 shot, CHRF) | **0.508**                 | 0.072   | 0.143     | 0.068     | 0.062    |
-| FLORES sl->en (8 shot, CHRF) | **0.565**                 | 0.066   | 0.182     | 0.184     | 0.058    |
-| FLORES en->sl (8 shot, BLEU) | **0.202**                 | 0.000   | 0.004     | 0.152     | 0.000    |
-| FLORES sl->en (8 shot, BLEU) | **0.273**                 | 0.000   | 0.010     | 0.007     | 0.000    |
-| Belebele (3 shot)            | **42.78%**                | 26.11%  | 23.44%    | 23.33%    | 23.89%   |
-| SIB-200 (3 shot)             | **56.37%**                | -       | 41.18%    | 50.00%    | 40.69%   |
+For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 
 ## Uses
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
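The context line above about growing the vocabulary from 32,000 to 57,000 tokens compresses a concrete procedure. Below is a minimal sketch of that kind of extension, assuming a separately trained Slovenian tokenizer (the path is hypothetical) and the standard `transformers` resize API; the paper, not this card, documents SambaLingo's exact recipe:

```python
# Sketch of the vocabulary-extension step described above; NOT the exact
# SambaLingo recipe. New-language tokens that do not overlap with the base
# Llama 2 vocabulary are added, and the embedding matrix is resized to match.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
sl_tok = AutoTokenizer.from_pretrained("path/to/slovenian-tokenizer")  # hypothetical

base_vocab = set(base_tok.get_vocab())
extra = [t for t in sl_tok.get_vocab() if t not in base_vocab]
added = base_tok.add_tokens(extra[:25_000])  # "up to 25,000 non-overlapping tokens"
print(f"vocabulary grew by {added} tokens")

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model.resize_token_embeddings(len(base_tok))  # new rows start randomly initialized
# The new embeddings are then learned during continued pre-training.
```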
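The removed table reports FLORES translation quality as CHRF and BLEU. For readers who want numbers in the same style, here is a sketch using `sacrebleu` on placeholder sentences; note that `sacrebleu` reports scores on a 0-100 scale, while the table appears to use 0-1 (score / 100):

```python
# Sketch of CHRF/BLEU scoring in the style of the removed FLORES rows.
# The sentence pair below is a placeholder, not FLORES data.
import sacrebleu

hypotheses = ["Ljubljana is the capital of Slovenia."]         # model outputs
references = [["Ljubljana is the capital city of Slovenia."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}, CHRF = {chrf.score:.1f}")
```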
@@ -105,12 +97,12 @@ We would like to give a special thanks to the following groups:
 
 ## Cite SambaLingo
 ```
-@
-
-
-
-
-
-
+@misc{csaki2024sambalingo,
+      title={SambaLingo: Teaching Large Language Models New Languages},
+      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
+      year={2024},
+      eprint={2404.05829},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
 }
 ```