how to use with llama.cpp when mmproj-model-f16.gguf is missing

#1
by hermes42 - opened

I cannot find any source for mmproj-model-f16.gguf which is needed for running with llama.cpp

OpenBMB org

https://huggingface.co/openbmb/MiniCPM-o-4_5-gguf/blob/main/vision/MiniCPM-o-4_5-vision-F16.gguf

This is the original mmproj file; it provides the vision part of the GGUF. Because the omni model has so many modules, I want to use unified names as much as possible. Continuing to use the name mmproj would be very confusing.
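As a minimal sketch of how the two files fit together with a recent llama.cpp build: the language model GGUF goes to `-m` and the vision file above goes to `--mmproj`. The binary name `llama-mtmd-cli`, the flags, and the local file names here are assumptions based on current llama.cpp multimodal tooling, not confirmed by this thread.

```shell
# Hypothetical invocation: file names and binary are assumptions.
# Language model GGUF (downloaded from the repo root):
MODEL=MiniCPM-o-4_5-Q4_K_M.gguf
# Vision projector GGUF (the file linked above):
MMPROJ=MiniCPM-o-4_5-vision-F16.gguf

# One-shot image description with the multimodal CLI:
./llama-mtmd-cli -m "$MODEL" --mmproj "$MMPROJ" \
  --image photo.jpg -p "Describe this image."

# Or serve it over HTTP; llama-server also accepts --mmproj:
./llama-server -m "$MODEL" --mmproj "$MMPROJ" --port 8080
```

If the projector file is omitted, the model loads as text-only, which matches the LM Studio behavior described below.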

Is there any UI you provide for inference, to use the capabilities of the model? LM Studio doesn't even recognize the vision capabilities.

OpenBMB org

Yes, we will provide a full set of demo code and a packaged Docker image that users can deploy easily; this is in progress. We hope community users can truly use it on their own Macs with the same results as the online demo.

When running with llama-server and loading the provided https://huggingface.co/openbmb/MiniCPM-o-4_5-gguf/blob/main/vision/MiniCPM-o-4_5-vision-F16.gguf as the mmproj, I get the error GGML_ASSERT(false && "unsupported minicpmv version") failed. Downloading and running with ollama hits the same error: Error: 500 Internal Server Error: llama runner process has terminated: GGML_ASSERT(false && "unsupported minicpmv version") failed

OpenBMB org

@lan0004
The MiniCPM-o 4.5 vision-language update has been merged into llama.cpp; you can pull the latest code to use it.
For ollama, you can use our branch https://github.com/tc-mb/ollama/tree/Suppport-MiniCPM-o-4.5 and build it yourself.
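The ollama route above can be sketched as follows. The branch URL comes from the reply; the build steps (`go generate` followed by `go build`) are an assumption based on ollama's usual from-source workflow and may differ on this fork.

```shell
# Clone the fork at the branch named in the reply (URL as given, including its spelling):
git clone -b Suppport-MiniCPM-o-4.5 https://github.com/tc-mb/ollama.git
cd ollama

# Assumed standard ollama source build; requires a Go toolchain.
go generate ./...
go build .

# Run the locally built binary instead of a system-installed ollama:
./ollama serve
```

Once the patched server is running, pulling and running the model should no longer hit the "unsupported minicpmv version" assertion.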

Thanks for the reply, I'll give it a try.

It runs successfully now, thank you very much.

How do I use it with llama.cpp? What about the vision features? Which files do I download and run? Please help.
