Is this the pre-trained or the Instruct version?

by nulled - opened

Just checking since it's hallucinating a lot.

This is the "final" version, they released 4 varieties of each, seems instruct is not the version you're meant to use (I tried instruct at first but it just kept generating it's EOS token mid sentence)

This model seems to generate a lot but doesn't respond to the question. I installed it using the ollama run hf_api_here approach. The model downloaded and installed properly, but in both Open WebUI and when running VQA tasks via the ollama run ... approach, it degenerates and doesn't give coherent output.
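For anyone trying to reproduce this, here's roughly the kind of VQA call I mean, as a minimal sketch using the ollama Python client; the model tag and image path below are placeholders, not the exact names from this repo:

```python
# Minimal VQA sketch via the ollama Python client (pip install ollama).
# "internvl3_5-30b-a3b" and "photo.jpg" are placeholders; substitute the tag
# you actually pulled (see `ollama list`) and a local image file.
import ollama

response = ollama.chat(
    model="internvl3_5-30b-a3b",  # hypothetical tag, not the repo's exact name
    messages=[
        {
            "role": "user",
            "content": "What objects are on the table in this image?",
            "images": ["photo.jpg"],  # local path; the client handles encoding
        }
    ],
)
print(response["message"]["content"])
```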

llama.cpp seems to run it just fine, but I can't convince it to do any reasoning.
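If it helps for comparison, one way to poke at the reasoning behaviour is to serve the GGUF with llama.cpp's llama-server and hit its OpenAI-compatible chat endpoint. Below is a rough sketch; the port, model name, and prompts are assumptions about a local setup, not anything documented in this repo:

```python
# Rough sketch: query a locally running llama-server (llama.cpp) through its
# OpenAI-compatible /v1/chat/completions endpoint and nudge it toward
# step-by-step reasoning. URL, model name, and prompts are assumptions.
import requests

payload = {
    "model": "internvl3_5-30b-a3b",  # largely ignored by llama-server, which serves one model
    "messages": [
        {"role": "system", "content": "Think step by step before giving your final answer."},
        {"role": "user", "content": "A train leaves at 3:40 pm and arrives at 5:15 pm. How long is the trip?"},
    ],
    "temperature": 0.6,
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```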

Running it with Ollama, I get nonsense replies.

@bartowski "Here, we also open-source the model weights after different training stages for potential research usage. If you're unsure which version to use, please select the one without any suffix, as it has completed the full training pipeline." You are correct!

The 30B-A3B in this repo is so good at vision tasks that it's become my default VLM. I hope more people learn about it.

They also recently released https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B-Flash with ViCo training. It would be nice if you could GGUF that one too. Thank you.
