Update README.md
README.md
CHANGED
@@ -20,6 +20,10 @@ NeuralBeagle14-7B is a DPO fine-tune of [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B)
Thanks [Argilla](https://huggingface.co/argilla) for providing the dataset and the training recipe [here](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp). 💪

## 💻 Usage
This model uses an 8k context window. Compared to other 7B models, it performs well on instruction-following and reasoning tasks, and it can also be used for role-play (RP) and storytelling.
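The card's own snippet isn't shown in this diff, so here is a minimal sketch of how a model like this is typically run with the Hugging Face `transformers` text-generation pipeline; the repo id, prompt, dtype, and sampling parameters (`temperature`, `top_p`, `max_new_tokens`) are illustrative assumptions, not the card's prescribed settings.

```python
# Minimal sketch, assuming the transformers text-generation pipeline;
# the prompt and sampling settings below are illustrative, not an
# official recipe from the model card.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "mlabonne/NeuralBeagle14-7B"  # assumed Hub repo id

# Format a chat turn with the tokenizer's built-in chat template.
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Load the model and generate a response.
generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
output = generator(
    prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95
)
print(output[0]["generated_text"])
```

With `device_map="auto"`, the weights are placed across available devices automatically, and the 8k context window leaves room for long multi-turn prompts.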
## 🏆 Evaluation
The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite. It is the best-performing 7B model on this benchmark suite to date.