Update inference examples to use the correct chat template

#12
by mario-sanz - opened

Hey there! πŸ‘‹

I noticed that the current Python examples for transformers and vllm don't use the chat template. It looks like these examples were originally written for the base model, but since this is the Think version, skipping that formatting causes the model to generate unexpected or low-quality outputs.

I've updated the code snippets to use apply_chat_template so the prompts are formatted exactly as the model expects (handling the <|im_start|> and <|think|> tokens automatically). This should make the examples work much more smoothly for new users!
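For reference, here is a minimal sketch of what the updated transformers snippet looks like after the change (the model id and prompt below are placeholders, not the actual repo examples):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-Think"  # placeholder: replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a brief explanation of gravity."}]

# apply_chat_template wraps the conversation in the special tokens the model
# was trained with (<|im_start|>, <|think|>, ...), so they never have to be
# written by hand in the prompt string.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The vllm example follows the same idea: call `apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` and pass the resulting string to `llm.generate`.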

Thanks for releasing the model! πŸš€

