GGUF update required for Qwen3-VL support with official builds of llama.cpp
Official llama.cpp, as of build b6907, now supports converting Qwen3-VL to GGUF format, and the resulting model can be tested with llama-mtmd-cli.
The GGUF file has been uploaded.
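For anyone reproducing this locally, here is a minimal sketch of the conversion and smoke-test flow. It assumes a llama.cpp checkout at build b6907 or newer, a local Hugging Face snapshot of the Qwen3-VL model in ./Qwen3-VL, a test image at ./test.jpg, and the built binaries on PATH; the paths are placeholders, and the --mmproj conversion flag and its interaction with --outfile should be checked against `convert_hf_to_gguf.py --help` for your version.

```python
# Sketch of the Qwen3-VL -> GGUF conversion and a quick llama-mtmd-cli test.
# All paths (./llama.cpp, ./Qwen3-VL, ./test.jpg) are placeholders; adjust to your setup.
import subprocess

MODEL_DIR = "./Qwen3-VL"   # local Hugging Face snapshot of the Qwen3-VL model
LLAMA_CPP = "./llama.cpp"  # llama.cpp checkout at build b6907 or newer

# 1) Convert the language-model weights to GGUF (f16 here; quantize afterwards if needed).
subprocess.run(
    ["python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", "qwen3-vl-f16.gguf", "--outtype", "f16"],
    check=True,
)

# 2) Export the multimodal projector to its own GGUF.
#    (--mmproj together with --outfile is an assumption; verify with --help.)
subprocess.run(
    ["python", f"{LLAMA_CPP}/convert_hf_to_gguf.py", MODEL_DIR,
     "--mmproj", "--outfile", "mmproj-qwen3-vl-f16.gguf"],
    check=True,
)

# 3) Smoke-test the model + projector pair with llama-mtmd-cli on a sample image.
subprocess.run(
    ["llama-mtmd-cli", "-m", "qwen3-vl-f16.gguf",
     "--mmproj", "mmproj-qwen3-vl-f16.gguf",
     "--image", "./test.jpg", "-p", "Describe this image."],
    check=True,
)
```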