Update README.md
README.md CHANGED
@@ -167,7 +167,19 @@ Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatr

# Citation

-
+You can use the following BibTeX citation:
+```
+@misc{zuo2024falconmambacompetitiveattentionfree,
+      title={Falcon Mamba: The First Competitive Attention-free 7B Language Model},
+      author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid},
+      year={2024},
+      eprint={2410.05355},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2410.05355},
+}
+```
+
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/tiiuae__falcon-mamba-7b-details)
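If you reference this model from a LaTeX document, the entry added above can be cited in the usual way. Below is a minimal sketch; the file name `references.bib` (assumed to contain the BibTeX entry) is an illustrative assumption, not part of the commit:

```latex
% Minimal sketch of citing the Falcon Mamba technical report from LaTeX.
% Assumes the BibTeX entry above was saved to references.bib (hypothetical file name).
\documentclass{article}
\begin{document}
Falcon Mamba is a competitive attention-free 7B language
model~\cite{zuo2024falconmambacompetitiveattentionfree}.
\bibliographystyle{plain}
\bibliography{references}
\end{document}
```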