Can you make this model without the fix for llama.cpp

#1
by jje720vGO - opened

This model worked on Ollama, but since Ollama 11.6 multimodality is broken. Because of that, I wanted to ask if you can make a normal version of huihui-ai/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated. This model was perfect until the update broke it on Ollama.

Thanks

If multimodality is broken, simply don't provide the mmproj file containing the vision stack. If Ollama automatically recognizes this as a vision model and so picks up the mmproj file on its own, just use https://huggingface.co/mradermacher/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-llamacppfixed-i1-GGUF, which are the much better weighted/imatrix quants and don't contain the mmproj file, as we only put that in the static quant repository.
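As a rough sketch of that workflow: the snippet below pulls only the model GGUF (no mmproj) from the imatrix repository and imports it into Ollama as a text-only model. The quant filename and the Ollama model name are assumptions based on the repository's usual naming, so check the repo's file list first.

```python
# Sketch: download only the model GGUF (no mmproj file) and import it into
# Ollama, so there is no vision stack for Ollama to auto-detect.
import subprocess
from huggingface_hub import hf_hub_download

# The filename below is an assumption based on mradermacher's usual naming
# convention; verify it against the repository's actual file list.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-llamacppfixed-i1-GGUF",
    filename="Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated-llamacppfixed.i1-Q4_K_M.gguf",
)

# Minimal Modelfile pointing only at the downloaded GGUF.
with open("Modelfile", "w") as f:
    f.write(f"FROM {gguf_path}\n")

# "huihui-mistral-text" is a hypothetical local model name.
subprocess.run(["ollama", "create", "huihui-mistral-text", "-f", "Modelfile"], check=True)
```

After that, `ollama run huihui-mistral-text` should load the model as plain text-only, with no mmproj for Ollama to pick up.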

Thanks for the suggestion. The problem is that I need multimodality for that model. If I import the model, even with the mmproj file, the import works, but it can't describe images anymore.

I can only guess that the problem is caused by the tweak that makes it work on llama.cpp, but there is only this GGUF version of the model. So I wanted to ask for a version of this model without the llamacppfixed tweak. I am not sure if that will fix the problem, but I think it might help. As far as I know, Ollama recently moved to a new version of llama.cpp. Maybe that version includes a fix so that the Mistral model works without the tweak, because when Ollama adopted that llama.cpp version, the tweaked model stopped working.
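One rough way to narrow down whether the llamacppfixed tweak or Ollama itself is at fault would be to run the same GGUF plus mmproj directly through llama.cpp's multimodal CLI, bypassing Ollama entirely. A minimal sketch, assuming llama.cpp is built locally and using placeholder file paths:

```python
# Sketch: test vision outside Ollama by calling llama.cpp's multimodal CLI
# directly. All file paths below are placeholders; adjust to your local files.
import subprocess

result = subprocess.run(
    [
        "./llama-mtmd-cli",            # llama.cpp multimodal CLI binary
        "-m", "model.i1-Q4_K_M.gguf",  # placeholder: the quantized model
        "--mmproj", "mmproj.gguf",     # placeholder: the vision projector
        "--image", "test.png",         # any local test image
        "-p", "Describe this image.",
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)
# If this describes the image correctly while Ollama does not, the regression
# is on Ollama's side rather than in the GGUF or the llamacppfixed tweak.
```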

Unfortunately, and not entirely surprisingly, the non-llamacppfixed version does not work with llama.cpp, so no GGUF can be made from it.

Okay, good to know. Thanks. Then I will just hope Ollama fixes support for multimodal models in one of the next updates.
