# 🦙 Meta-Llama-3.1-8B-Instruct-abliterated
This is an uncensored version of Llama 3.1 8B Instruct created with abliteration (see this article to learn more about the technique).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
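If you want to try the model directly with 🤗 Transformers, here is a minimal chat-style generation sketch. It assumes the original (non-GGUF) weights are published under the repo id `mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated` and that you have enough VRAM for bf16 inference; adjust the repo id, dtype, or device map to your setup.

```python
# Minimal inference sketch (assumption: the full-precision repo id is
# "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated"; change it if yours differs).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat prompt with the Llama 3.1 chat template and generate a reply.
messages = [{"role": "user", "content": "Explain abliteration in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```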
## ⚡️ Quantization
Thanks to ZeroWw and Apel-sin for the quants; a minimal GGUF loading sketch follows the list below.
- New GGUF: https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
- ZeroWw GGUF: https://huggingface.co/ZeroWw/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
- EXL2: https://huggingface.co/Apel-sin/llama-3.1-8B-abliterated-exl2
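For the GGUF quants, a sketch with `llama-cpp-python` might look like the following. The `Q4_K_M` filename pattern is an assumption; substitute whichever quant file you actually download, and point `repo_id` at whichever GGUF repo from the list above you prefer.

```python
# Minimal GGUF loading sketch with llama-cpp-python (assumptions: the package is
# installed, and a Q4_K_M quant exists in the chosen GGUF repo).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; glob matches the Q4_K_M file if present
    n_ctx=8192,               # context window to allocate
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in three words."}],
    max_tokens=64,
)
print(response["choices"][0]["message"]["content"])
```

Lower-bit quants trade some output quality for memory; Q4_K_M is a common middle ground for 8B models on consumer hardware.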
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 23.13 |
| IFEval (0-Shot) | 73.29 |
| BBH (3-Shot) | 27.13 |
| MATH Lvl 5 (4-Shot) | 6.42 |
| GPQA (0-shot) | 0.89 |
| MuSR (0-shot) | 3.21 |
| MMLU-PRO (5-shot) | 27.81 |
## Model tree for Sriexe/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
- Base model: meta-llama/Llama-3.1-8B
- Finetuned from: meta-llama/Llama-3.1-8B-Instruct
