Upload README.md
README.md CHANGED
@@ -63,18 +63,18 @@ This repo contains GGUF format model files for [Eric Hartford's Dolphin 2.5 Mixt
 
 GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
 
-
-
-
-
-
-
- *
- *
- *
- *
-
-
+### Mixtral GGUF
+
+Support for Mixtral was merged into Llama.cpp on December 13th.
+
+These Mixtral GGUFs are known to work in:
+
+* llama.cpp as of December 13th
+* KoboldCpp 1.52 and later
+* LM Studio 0.2.9 and later
+* llama-cpp-python 0.2.23 and later
+
+Other clients/libraries, not listed above, may not yet work.
 
 <!-- README_GGUF.md-about-gguf end -->
 <!-- repositories-available start -->
@@ -103,9 +103,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- compatibility_gguf start -->
 ## Compatibility
 
-These
-
-They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
+These Mixtral GGUFs are compatible with llama.cpp from December 13th onwards. Other clients/libraries may not work yet.
 
 ## Explanation of quantisation methods
 
@@ -221,11 +219,13 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
 
 ## How to run in `text-generation-webui`
 
+Note that text-generation-webui may not yet be compatible with Mixtral GGUFs. Please check compatibility first.
+
 Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
 
 ## How to run from Python code
 
-You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
+You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) library, version 0.2.23 and later.
 
 ### How to load this model in Python code, using llama-cpp-python
 
@@ -294,7 +294,6 @@ llm.create_chat_completion(
 Here are guides on using llama-cpp-python and ctransformers with LangChain:
 
 * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
-* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
 
 <!-- README_GGUF.md-how-to-run end -->
 
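The "How to load this model in Python code, using llama-cpp-python" section that the diff touches does not show its code here, so a minimal sketch follows. The model filename, context size, and the ChatML prompt template are assumptions for illustration — confirm the actual prompt format against the model card before relying on it:

```python
# Sketch: loading a Mixtral GGUF with llama-cpp-python (>= 0.2.23, per the diff).

def build_chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt, the format Dolphin models are commonly
    trained on (an assumption here -- check the model card)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def run_demo() -> None:
    # Requires: pip install "llama-cpp-python>=0.2.23" and a downloaded GGUF file.
    from llama_cpp import Llama

    llm = Llama(
        model_path="dolphin-2.5-mixtral-8x7b.Q4_K_M.gguf",  # assumed filename
        n_ctx=4096,       # context window; raise if you have the RAM
        n_gpu_layers=0,   # raise to offload layers to a GPU
    )
    out = llm(
        build_chatml_prompt("You are a helpful assistant.", "Hello!"),
        max_tokens=128,
        stop=["<|im_end|>"],  # stop at the end of the assistant turn
    )
    print(out["choices"][0]["text"])
```

Call `run_demo()` once a GGUF file is in place. llama-cpp-python also offers `llm.create_chat_completion(...)` (visible in the last hunk's context), which accepts OpenAI-style message lists and applies the chat template for you.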