Qwen3-Coder Tool Calling Fixes

#10
by danielhanchen - opened

Hey everyone! We managed to fix tool calling via llama.cpp's --jinja flag, specifically for serving through llama-server!

PLEASE NOTE: This issue was universal and affected all uploads (not just Unsloth) regardless of source/uploader, and we've communicated with the Qwen team about our fixes!

To get the latest fixes, do one of the following:

  1. Download the first file at https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF/tree/main/UD-Q2_K_XL for UD-Q2_K_XL, and replace your current file
  2. Use snapshot_download as usual, as shown in https://docs.unsloth.ai/basics/qwen3-coder-how-to-run-locally#llama.cpp-run-qwen3-tutorial, which will automatically overwrite the old files (a short sketch follows after this list)
  3. Use the new chat template via --chat-template-file; see the GGUF chat template or chat_template.jinja
  4. As an extra, I also made a single 150GB UD-IQ1_M file (so Ollama works) at https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF/blob/main/Qwen3-Coder-480B-A35B-Instruct-UD-IQ1_M.gguf
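
For option 2, a minimal snapshot_download sketch (Python; the local_dir and allow_patterns values here are assumptions, adjust them to your setup and quant):

# Re-download only the UD-Q2_K_XL shards; files of the same name are
# replaced with the fresh snapshot.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF",
    local_dir="Qwen3-Coder-480B-A35B-Instruct-GGUF",
    allow_patterns=["*UD-Q2_K_XL*"],
)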

This should solve issues like https://github.com/ggml-org/llama.cpp/issues/14915


I tried the new version and tool calling still crashed.
Here is my command:

./build/bin/llama-server \
                         --alias Qwen3-480B-A35B-Instruct \
                         --model /root/models/Qwen3-Coder-480B-A35B-Instruct-GGUF/UD-Q4_K_XL/UD-Q4_K_XL/Qwen3-Coder-480B-A35B-Instruct-UD-Q4_K_XL-00001-of-00006.gguf \
                         --ctx-size 102400 \
                         --cache-type-k q8_0 \
                         --cache-type-v q8_0 \
                         -fa \
                         --temp 0.7 \
                         --top_p 0.8 \
                         --top_k 20 \
                         --n-gpu-layers 99 \
                         --override-tensor "blk\.[0-3]\.ffn_.*=CUDA0,exps=CPU" \
                         --parallel 1 \
                         --threads 104 \
                         --host 0.0.0.0 \
                         --port 8080 \
                         --min_p 0.001 \
                         --threads-batch 52 \
                         --jinja \
                         -b 8192 -ub 4096 \
                         --chat-template-file /root/models/Qwen3-Coder-480B-A35B-Instruct-GGUF/chat_template_working.jinja
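
For what it's worth, here is a minimal sketch (Python, not from the thread) to exercise the tool-calling path against this server via the OpenAI-compatible endpoint; the port and alias match the command above, and the get_weather tool is just a made-up example:

import json
import requests

payload = {
    "model": "Qwen3-480B-A35B-Instruct",  # matches --alias above
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative tool, not a real one
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload)
resp.raise_for_status()
print(json.dumps(resp.json()["choices"][0]["message"], indent=2))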

I created a chat template that doesn't crash. Feel free to use it: https://gist.github.com/iSevenDays/4583750a17ee453783cbaa3acd4ab5fc

I had a lot of different problems using qwen3-coder with OpenCode; see e.g. https://github.com/sst/opencode/issues/1809
The model simply does not follow the rules for tool calling: the problems included wrong formatting, such as an array "hidden" inside a string instead of a plain array, or missing mandatory fields. This happened even with the chat template that ships with unsloth/Qwen3-Coder-30B-A3B-Instruct and with the template from the previous post.
(When using the chat template from @isevendays with vLLM, all tool calls failed for me.)
I experimented with a number of chat templates, without success. (And I'm not even convinced that the chat template is the root cause.)

As a quick and dirty workaround I created a small proxy that sits between qwen3-coder and the client (e.g. OpenCode) and corrects the tool calls, e.g. by adding the mandatory "description" field to the "bash" tool call if the LLM did not provide it: https://github.com/florath/qwen3-call-patch-proxy

I'd be happy if somebody could tell me the root cause of these problems and provide a fix. I'd be more than happy to drop my hackish workaround as soon as it is no longer needed.
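
For reference, the core of the patching idea looks roughly like this (a minimal Python sketch, not the actual code from the proxy repo; the default values are assumptions based on the problems described above):

import json

def patch_tool_call(tool_call: dict) -> dict:
    """Fill in fields the client treats as mandatory and unwrap stringified arrays."""
    fn = tool_call.get("function", {})
    try:
        args = json.loads(fn.get("arguments") or "{}")
    except json.JSONDecodeError:
        return tool_call  # leave unparseable arguments untouched

    # Unwrap values like '["a", "b"]' that the model emitted as strings
    # instead of plain arrays.
    for key, value in list(args.items()):
        if isinstance(value, str) and value.lstrip().startswith("["):
            try:
                args[key] = json.loads(value)
            except json.JSONDecodeError:
                pass

    # Add the mandatory "description" field for the bash tool if missing.
    if fn.get("name") == "bash" and "description" not in args:
        args["description"] = args.get("command", "run shell command")

    fn["arguments"] = json.dumps(args)
    tool_call["function"] = fn
    return tool_call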

I adapted the earlier posted version to get it working in OpenCode: https://gist.github.com/tifoji/8559819fa289f1fe26fa5fd86d62216f

llama-server \
    --model "Qwen3-Coder-480B-A35B-Instruct-UD-Q4_K_XL-00001-of-00006.gguf" \
    --alias qwen3-coder \
    --threads -1 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --temp 0.7 \
    --min-p 0.0 \
    --top-p 0.8 \
    --top-k 20 \
    --repeat-penalty 1.05 \
    --jinja \
    --reasoning-format auto \
    --no-context-shift \
    --ctx-size 131072 \
    --chat-template-file chat_template_llamacpp.jinja

The first prompt, "explain daylight savings time", took a long time to respond.

prompt eval time =  249554.33 ms /  9433 tokens (   26.46 ms per token,    37.80 tokens per second)
       eval time =   54438.50 ms /   224 tokens (  243.03 ms per token,     4.11 tokens per second)
      total time =  303992.83 ms /  9657 tokens
srv  update_slots: all slots are idle

Subsequent prompts were faster, for example "what is the current temperature in san francisco". It tried a webfetch first, which failed, then made a bash curl tool call successfully.

slot print_timing: id  0 | task 584 | 
prompt eval time =    1926.32 ms /    30 tokens (   64.21 ms per token,    15.57 tokens per second)
       eval time =   18938.54 ms /    81 tokens (  233.81 ms per token,     4.28 tokens per second)
      total time =   20864.86 ms /   111 tokens

Tested on a Mac Studio with 512GB.
