Should the chat template include <think> in the generation prompt for tool calling?
Hi! I've been experimenting with Kimi-K2-Instruct for tool calling use cases and noticed an intermittent issue I wanted to ask about.
What I'm seeing:
When using the model with tools enabled, I occasionally get incomplete responses where the model seems to stop mid-thought. Looking at the raw token output, I noticed it sometimes generates <|im_end|> before completing the tool call.
After some debugging, I noticed the current chat template's generation prompt is:
```jinja
{%- if add_generation_prompt -%}
<|im_assistant|>assistant<|im_middle|>
{%- endif -%}
```
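In case it's useful, this is roughly how I dumped the template; a minimal sketch, assuming the public moonshotai/Kimi-K2-Instruct checkpoint id (adjust to your local path):

```python
from transformers import AutoTokenizer

# trust_remote_code is needed for the custom Kimi tokenizer class.
tok = AutoTokenizer.from_pretrained(
    "moonshotai/Kimi-K2-Instruct", trust_remote_code=True
)

# The generation-prompt block quoted above sits near the end of this string.
print(tok.chat_template)
```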
My question:
Since Kimi-K2 is a thinking model that outputs <think>...</think> blocks, should the generation prompt include the opening <think> tag to ensure the model starts in thinking mode?
Something like:
```jinja
{%- if add_generation_prompt -%}
<|im_assistant|>assistant<|im_middle|><think>
{%- endif -%}
```
I noticed that when I manually add <think> to the generation prompt, the model's tool calling behavior becomes more consistent. But I'm not sure if this is the intended usage or if I'm missing something in my setup.
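To test this, I've been rendering the prompt offline and appending the tag by hand before sending it through a plain completions call. A minimal sketch of the comparison, again assuming the public checkpoint id; the example message is illustrative:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "moonshotai/Kimi-K2-Instruct", trust_remote_code=True
)

# Illustrative single-turn conversation.
messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

# Stock prompt: ends with <|im_assistant|>assistant<|im_middle|>
stock = tok.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

# The variant I've been testing: the same prompt with <think> appended.
patched = stock + "<think>"

print(repr(stock[-60:]))
print(repr(patched[-60:]))
```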
Context:
- Using the model via vLLM with --enable-auto-tool-choice and --tool-call-parser kimi_k2
- Temperature: 1.0, top_p: 1.0 (default settings)
- The issue happens in roughly 1 in 20 requests (minimal repro sketch below)
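For completeness, here's the kind of request that intermittently truncates; a minimal repro sketch against the vLLM OpenAI-compatible endpoint (the tool schema, URL, and served model name are illustrative, not my exact setup):

```python
from openai import OpenAI

# Points at the local vLLM server; URL and API key are illustrative.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Illustrative tool schema, not my real one.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
    tool_choice="auto",
    temperature=1.0,
    top_p=1.0,
)

# Roughly 1 in 20 of these comes back with a truncated or missing tool call.
print(resp.choices[0].message.tool_calls)
```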
Would love to hear if this is expected behavior or if there's a recommended way to handle this. Thanks!