Upload README.md with huggingface_hub
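For reference, a commit like this one is produced programmatically. Below is a minimal sketch of the upload using the `huggingface-cli` tool that ships with `huggingface_hub`; the repo id is a hypothetical placeholder, since this diff does not name the GGUF repo.

```bash
# Minimal sketch: uploading a README with the huggingface-cli that ships
# with huggingface_hub. The repo id below is a hypothetical placeholder.
pip install -U huggingface_hub
huggingface-cli login   # paste a token with write access

huggingface-cli upload \
  your-username/Dolphin3.0-R1-Mistral-24B-Q4_K_M-GGUF \
  README.md README.md \
  --commit-message "Upload README.md with huggingface_hub"
```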
README.md CHANGED
```diff
@@ -1,4 +1,5 @@
 ---
+base_model: cognitivecomputations/Dolphin3.0-R1-Mistral-24B
 datasets:
 - cognitivecomputations/dolphin-r1
 - OpenCoder-LLM/opc-sft-stage1
@@ -16,9 +17,8 @@ datasets:
 - m-a-p/Code-Feedback
 language:
 - en
-base_model: cognitivecomputations/Dolphin3.0-R1-Mistral-24B
-pipeline_tag: text-generation
 library_name: transformers
+pipeline_tag: text-generation
 tags:
 - llama-cpp
 - gguf-my-repo
@@ -28,67 +28,6 @@ tags:
 This model was converted to GGUF format from [`cognitivecomputations/Dolphin3.0-R1-Mistral-24B`](https://huggingface.co/cognitivecomputations/Dolphin3.0-R1-Mistral-24B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin3.0-R1-Mistral-24B) for more details on the model.
 
----
-What is Dolphin?
-
-Dolphin 3.0 R1 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model, enabling coding, math, agentic, function calling, and general use cases.
-
-The R1 version has been trained for 3 epochs to reason using 800k reasoning traces from the Dolphin-R1 dataset.
-
-Dolphin aims to be a general-purpose reasoning instruct model, similar to the models behind ChatGPT, Claude, and Gemini. But these models present problems for businesses seeking to include AI in their products:
-
-They maintain control of the system prompt, deprecating and changing things as they wish, often causing software to break.
-They maintain control of the model versions, sometimes changing things silently, or deprecating older models that your business relies on.
-They maintain control of the alignment, and in particular the alignment is one-size-fits-all, not tailored to the application.
-They can see all your queries, and they can potentially use that data in ways you wouldn't want.
-
-Dolphin, in contrast, is steerable and gives control to the system owner. You set the system prompt. You decide the alignment. You have control of your data. Dolphin does not impose its ethics or guidelines on you. You are the one who decides the guidelines.
-
-Dolphin belongs to YOU; it is your tool, an extension of your will. Just as you are personally responsible for what you do with a knife, gun, fire, car, or the internet, you are the creator and originator of any content you generate with Dolphin.
-
-https://erichartford.com/uncensored-models
-
-Recommended Temperature
-
-Experimentally we note that Mistral-24B-based models require a low temperature. We have seen much better results in the range of 0.05 to 0.1.
-
-With Dolphin3.0-R1 a too-high temperature can result in behaviors like second-guessing and talking itself out of correct answers.
----
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
```
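The card states the conversion was performed by the GGUF-my-repo space. As a hedged sketch of roughly what that space automates, assuming a llama.cpp checkout, and treating the output filenames and the Q4_K_M quantization type as illustrative assumptions not taken from this diff:

```bash
# Hedged sketch of a local equivalent of the GGUF-my-repo conversion.
# Filenames and the Q4_K_M quant type are illustrative assumptions.
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# Fetch the original HF checkpoint locally
huggingface-cli download cognitivecomputations/Dolphin3.0-R1-Mistral-24B \
  --local-dir Dolphin3.0-R1-Mistral-24B

# Convert to a full-precision GGUF, then quantize
# (llama-quantize comes with `brew install llama.cpp` or a compiled checkout)
python llama.cpp/convert_hf_to_gguf.py Dolphin3.0-R1-Mistral-24B \
  --outtype f16 --outfile dolphin3.0-r1-mistral-24b-f16.gguf
llama-quantize dolphin3.0-r1-mistral-24b-f16.gguf \
  dolphin3.0-r1-mistral-24b-q4_k_m.gguf Q4_K_M
```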
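The diff ends at the brew instruction; the GGUF-my-repo README template typically continues with `llama-cli` and `llama-server` invocations. A hedged sketch follows, folding in the 0.05-0.1 temperature recommended in the section this commit deletes; the repo id and `.gguf` filename are hypothetical placeholders.

```bash
# Hedged sketch of the usual next steps after installing llama.cpp, using
# the low temperature (0.05-0.1) the card recommends for Mistral-24B-based
# models. The repo id and filename below are hypothetical placeholders.
brew install llama.cpp

# Chat from the CLI, pulling the GGUF directly from the Hub
llama-cli \
  --hf-repo your-username/Dolphin3.0-R1-Mistral-24B-Q4_K_M-GGUF \
  --hf-file dolphin3.0-r1-mistral-24b-q4_k_m.gguf \
  --temp 0.1 \
  -p "You are Dolphin, a helpful AI assistant."

# Or serve an OpenAI-compatible HTTP endpoint
llama-server \
  --hf-repo your-username/Dolphin3.0-R1-Mistral-24B-Q4_K_M-GGUF \
  --hf-file dolphin3.0-r1-mistral-24b-q4_k_m.gguf \
  -c 2048 --temp 0.1
```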