shafire/talktoai-F16-GGUF
This LoRA adapter was converted to GGUF format from shafire/talktoai using ggml.ai's GGUF-my-lora space.
Refer to the original adapter repository for more details.
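If you prefer to redo the conversion locally instead of using the space, llama.cpp ships a conversion script. A minimal sketch, assuming a llama.cpp checkout and local copies of the adapter and base model (all paths are placeholders; flag names may vary across llama.cpp versions):
# hypothetical local conversion with llama.cpp's script; paths are placeholders
python convert_lora_to_gguf.py /path/to/talktoai --base /path/to/base-model --outfile talktoai-f16.gguf --outtype f16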
LICENSE: Zero Public Licence v1.0
Section 1: Safety layer must stay intact.
Section 2: Export to states under UK embargo requires a licence.
Section 3: Author disclaims forks that remove Section 1 or 2.
Use with llama.cpp
# with cli
llama-cli -m base_model.gguf --lora talktoai-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora talktoai-f16.gguf (...other args)
To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
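As a quick check once llama-server is running, you can send a completion request over HTTP. A minimal sketch, assuming the server's default port 8080 (the prompt text is illustrative):
# query the running server's /completion endpoint
curl http://localhost:8080/completion -d '{"prompt": "Hello, ", "n_predict": 64}'
Recent llama.cpp builds also accept --lora-scaled FNAME SCALE in place of --lora if you want to tune the adapter's influence (e.g. --lora-scaled talktoai-f16.gguf 0.8); check llama-server --help for availability in your build.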
Model tree for shafire/talktoai-F16-GGUF
Base model: meta-llama/Llama-3.1-8B
Finetuned: meta-llama/Llama-3.1-8B-Instruct
Finetuned: shafire/talktoai