---
license: apache-2.0
base_model: amazingvince/zephyr-smol_llama-100m-sft-full
tags:
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: zephyr-smol_llama-100m-sft-full
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co) | [Twitter](https://twitter.com/tensorblock_aoi) | [Discord](https://discord.gg/Ej5NmeHFf2) | [GitHub](https://github.com/TensorBlock) | [Telegram](https://t.me/TensorBlock)
## amazingvince/zephyr-smol_llama-100m-sft-full - GGUF
This repo contains GGUF format model files for [amazingvince/zephyr-smol_llama-100m-sft-full](https://huggingface.co/amazingvince/zephyr-smol_llama-100m-sft-full).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
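If you want a quick local sanity check, one minimal way to run a downloaded quant is llama.cpp's `llama-cli` binary. This is a sketch, not the only workflow: the file path, quant choice (Q4_K_M), and generation length are illustrative.
```shell
# Minimal sketch: run a downloaded quant with llama.cpp's CLI.
# Assumes llama.cpp is built at or after the commit above and the
# Q4_K_M file is in the current directory.
./llama-cli -m ./zephyr-smol_llama-100m-sft-full-Q4_K_M.gguf -p "Hello" -n 64
```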
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π Try it now! π</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
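For illustration, a prompt filled in with a hypothetical system message and user question looks like this before it is sent to the model:
```
<|system|>
You are a friendly chatbot.</s>
<|user|>
What is a GGUF file?</s>
<|assistant|>
```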
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [zephyr-smol_llama-100m-sft-full-Q2_K.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q2_K.gguf) | Q2_K | 0.048 GB | smallest, significant quality loss - not recommended for most purposes |
| [zephyr-smol_llama-100m-sft-full-Q3_K_S.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q3_K_S.gguf) | Q3_K_S | 0.054 GB | very small, high quality loss |
| [zephyr-smol_llama-100m-sft-full-Q3_K_M.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q3_K_M.gguf) | Q3_K_M | 0.056 GB | very small, high quality loss |
| [zephyr-smol_llama-100m-sft-full-Q3_K_L.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q3_K_L.gguf) | Q3_K_L | 0.059 GB | small, substantial quality loss |
| [zephyr-smol_llama-100m-sft-full-Q4_0.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q4_0.gguf) | Q4_0 | 0.064 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [zephyr-smol_llama-100m-sft-full-Q4_K_S.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q4_K_S.gguf) | Q4_K_S | 0.064 GB | small, greater quality loss |
| [zephyr-smol_llama-100m-sft-full-Q4_K_M.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q4_K_M.gguf) | Q4_K_M | 0.065 GB | medium, balanced quality - recommended |
| [zephyr-smol_llama-100m-sft-full-Q5_0.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q5_0.gguf) | Q5_0 | 0.074 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [zephyr-smol_llama-100m-sft-full-Q5_K_S.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q5_K_S.gguf) | Q5_K_S | 0.074 GB | large, low quality loss - recommended |
| [zephyr-smol_llama-100m-sft-full-Q5_K_M.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q5_K_M.gguf) | Q5_K_M | 0.074 GB | large, very low quality loss - recommended |
| [zephyr-smol_llama-100m-sft-full-Q6_K.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q6_K.gguf) | Q6_K | 0.084 GB | very large, extremely low quality loss |
| [zephyr-smol_llama-100m-sft-full-Q8_0.gguf](https://huggingface.co/tensorblock/zephyr-smol_llama-100m-sft-full-GGUF/blob/main/zephyr-smol_llama-100m-sft-full-Q8_0.gguf) | Q8_0 | 0.108 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/zephyr-smol_llama-100m-sft-full-GGUF --include "zephyr-smol_llama-100m-sft-full-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/zephyr-smol_llama-100m-sft-full-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
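If you would rather fetch every quant at once, downloading the whole repository to a local directory also works (the target directory name is just a placeholder):
```shell
huggingface-cli download tensorblock/zephyr-smol_llama-100m-sft-full-GGUF --local-dir MY_LOCAL_DIR
```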