Upload folder using huggingface_hub
#1
by MaziyarPanahi - opened
- .gitattributes +6 -0
- Fino1-8B-GGUF_imatrix.dat +3 -0
- Fino1-8B.Q5_K_M.gguf +3 -0
- Fino1-8B.Q5_K_S.gguf +3 -0
- Fino1-8B.Q6_K.gguf +3 -0
- Fino1-8B.Q8_0.gguf +3 -0
- Fino1-8B.fp16.gguf +3 -0
- README.md +45 -0
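For reference, a commit like this can be produced with a single `upload_folder` call from `huggingface_hub`. Below is a minimal sketch, not the exact command used here: the local folder path is a hypothetical placeholder, and authentication via `huggingface-cli login` is assumed.

```python
# Minimal sketch: upload a local folder of GGUF files to a model repo.
# Only the target repo id comes from this page; the local path is hypothetical.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token stored by `huggingface-cli login`
api.upload_folder(
    folder_path="./Fino1-8B-GGUF",          # hypothetical local folder holding the quantized files
    repo_id="MaziyarPanahi/Fino1-8B-GGUF",  # destination model repo
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```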
.gitattributes
CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Fino1-8B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Fino1-8B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Fino1-8B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Fino1-8B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Fino1-8B.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+Fino1-8B-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
Fino1-8B-GGUF_imatrix.dat
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca6d1f16d04bfc729d3856d0b66e4bde5c2047dcc7f68e14ee2aa197304d6880
+size 4988146
Fino1-8B.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ccd7337b49b008c3ab8a00e70e4b49936bf0b2eaab55c70da04747315030f43
+size 5732992832
Fino1-8B.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1d16c168b393e4617a8305daa9d10227b8ad2de14054ecbd445383ec4497579
+size 5599299392
Fino1-8B.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ba51f6ec66062e2e35f7dc13652cc165b0d09181e504b6eee194cbae6cfe358
+size 6596011840
Fino1-8B.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14fcd10bf29c1829b6daf17e66fd1920dccb2980665fd04ba472f899657852fa
+size 8540776256
Fino1-8B.fp16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5b42ef2cc22595e2a7a16ea9eb85be9d96eabd7d6082a39954ff0344c9988b4
+size 16068896352
README.md
ADDED
@@ -0,0 +1,45 @@
+---
+base_model: TheFinAI/Fino1-8B
+inference: false
+model_creator: TheFinAI
+model_name: Fino1-8B-GGUF
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- text-generation
+---
+# [MaziyarPanahi/Fino1-8B-GGUF](https://huggingface.co/MaziyarPanahi/Fino1-8B-GGUF)
+- Model creator: [TheFinAI](https://huggingface.co/TheFinAI)
+- Original model: [TheFinAI/Fino1-8B](https://huggingface.co/TheFinAI/Fino1-8B)
+
+## Description
+[MaziyarPanahi/Fino1-8B-GGUF](https://huggingface.co/MaziyarPanahi/Fino1-8B-GGUF) contains GGUF format model files for [TheFinAI/Fino1-8B](https://huggingface.co/TheFinAI/Fino1-8B).
+
+### About GGUF
+
+GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux is available in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
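As a usage sketch, assuming `huggingface_hub` and `llama-cpp-python` (one of the libraries listed above) are installed, one of the uploaded quants can be fetched and loaded roughly as follows; the chosen file, context size, and generation settings are illustrative, not prescribed by this repo.

```python
# Minimal sketch: download one quant from this repo and run it with llama-cpp-python.
# The Q5_K_M file, prompt, and parameters below are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Fino1-8B-GGUF",
    filename="Fino1-8B.Q5_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers when a GPU is available
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the key drivers of free cash flow."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Any of the other quants listed in this commit can be substituted via the `filename` argument; smaller quants reduce memory use at some cost in output quality.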