Update README.md
This model was converted to GGUF format from [`SicariusSicariiStuff/Dusk_Rainbow`](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow) for more details on the model.

---

Censorship level: Very low

9.1 / 10 (where 10 is completely uncensored)

Intended use: Creative Writing, General tasks.

This model is the result of training on a fraction (16M tokens) of the testing data intended for LLAMA-3_8B_Unaligned's upcoming beta. The base model is a merge of merges made by Invisietch, named EtherealRainbow-v0.3-8B. The name of this model reflects the base that was used for the finetune, while hinting at the darker, more uncensored aspects associated with the nature of the LLAMA-3_8B_Unaligned project.

As a result of the unique data added, this model has exceptional adherence to instructions about paragraph length and to the story-writing prompt. I would like to emphasize that no ChatGPT / Claude output was used for any of the additional data I added in this finetune. The goal is to eventually have a model with a minimal amount of slop; this cannot be done reliably with API models, which pollute datasets with their bias and repetitive words.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
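A minimal sketch of the usual install-and-run steps follows. The repo slug and GGUF filename below (`<converted-repo>`, `<quant-file>.gguf`) are placeholders rather than the actual values for this repository, and the `--hf-repo` / `--hf-file` flags assume a llama.cpp build with Hugging Face download support (the Homebrew build normally includes it):

```bash
# Install llama.cpp; the Homebrew formula provides the llama-cli and llama-server binaries.
brew install llama.cpp

# One-off generation, pulling the quantized model straight from the Hub.
# Replace the placeholders with this repo's actual slug and GGUF filename.
llama-cli \
  --hf-repo <converted-repo> \
  --hf-file <quant-file>.gguf \
  -p "Write the opening paragraph of a story that begins at dusk."

# Or serve the model over HTTP with a 2048-token context window.
llama-server \
  --hf-repo <converted-repo> \
  --hf-file <quant-file>.gguf \
  -c 2048
```

If you prefer to download the GGUF file yourself, the same binaries also accept a local path via `-m /path/to/model.gguf` in place of the `--hf-repo` / `--hf-file` pair.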