DeepseekV3ForCausalLM

#5
by davidboring - opened

The diff shows that most of the differences between modeling_glm4_moe_lite.py and modeling_deepseek_v3.py are just naming changes.

Even the TODO comment is copied: https://github.com/huggingface/transformers/blob/main/src/transformers/models/glm4_moe_lite/modeling_glm4_moe_lite.py#L187-L188

Question: can we simply use DeepseekV3ForCausalLM here?

Right. I just tried it on the transformers side too, and using DeepseekV3ForCausalLM for a simple conversation works. However, sglang and vLLM use different hooks, which could cause errors (especially sglang, where the kernel is different and support is still in progress).
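For reference, a minimal sketch of the "simple conversation" experiment described above, loading a GLM4-MoE-Lite checkpoint directly with DeepseekV3ForCausalLM. The checkpoint path is a placeholder, not a real repo id; transformers should warn about the model_type mismatch but still map the weights, since the architectures line up.

```python
# Sketch only: load a GLM4-MoE-Lite checkpoint with the DeepseekV3 model class.
# "path/to/glm4-moe-lite-checkpoint" is a placeholder for the actual repo id.
import torch
from transformers import AutoTokenizer, DeepseekV3ForCausalLM

ckpt = "path/to/glm4-moe-lite-checkpoint"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(ckpt)
# Expect a warning about instantiating a deepseek_v3 model from a
# glm4_moe_lite config; the weight names themselves should match.
model = DeepseekV3ForCausalLM.from_pretrained(
    ckpt, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```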

As for why even the TODOs are the same: the attention implementation in transformers is indeed completely identical to DeepseekV3Attention, and the modular system copies everything over from DeepseekV3.

What if we simply inherit from the DeepseekV3 modular/modeling module rather than duplicating the code?

The Qwen model appears to follow this approach by extending parts of the code from its previous version without duplication: https://github.com/huggingface/transformers/blob/9ed801f3ef0029e3733bbd2c9f9f9866912412a2/src/transformers/models/qwen2_5_omni/modular_qwen2_5_omni.py#L2064
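In that style, a hypothetical modular_glm4_moe_lite.py could simply subclass the DeepseekV3 classes and let the modular converter generate the renamed modeling file. The class bodies below are illustrative only; the real file may add or override more than this.

```python
# Hypothetical modular-file sketch: inherit from DeepseekV3 instead of
# duplicating its code. The modular converter expands these subclasses
# into a standalone modeling_glm4_moe_lite.py.
from transformers.models.deepseek_v3.configuration_deepseek_v3 import DeepseekV3Config
from transformers.models.deepseek_v3.modeling_deepseek_v3 import (
    DeepseekV3Attention,
    DeepseekV3ForCausalLM,
    DeepseekV3Model,
)


class Glm4MoeLiteConfig(DeepseekV3Config):
    model_type = "glm4_moe_lite"


class Glm4MoeLiteAttention(DeepseekV3Attention):
    # Identical math; only the name changes.
    pass


class Glm4MoeLiteModel(DeepseekV3Model):
    pass


class Glm4MoeLiteForCausalLM(DeepseekV3ForCausalLM):
    pass
```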

I remain skeptical that duplicating the code provides any meaningful benefit, and it strikes me as poor practice to omit acknowledgment of the original author's work.

Additionally, you mentioned this was intended to prevent issues with sglang and vLLM, yet:

Sorry for the oversight in my earlier comment: the modular file does inherit the DSv3 module: https://github.com/huggingface/transformers/blob/main/src/transformers/models/glm4_moe_lite/modular_glm4_moe_lite.py

And it looks like the only thing added on top of DSv3 is logic to ignore the MTP layer.
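For illustration, one way such an "ignore the MTP layer" addition could be expressed is via the generic `_keys_to_ignore_on_load_unexpected` hook on the pretrained-model base class, so the extra multi-token-prediction weights in the checkpoint are skipped instead of raising. The regex below is a placeholder; the actual modular file may use a different hook or pattern.

```python
# Sketch only: skip MTP checkpoint weights on load. The key pattern here is
# a placeholder; check the real modular_glm4_moe_lite.py for how it is done.
from transformers.models.deepseek_v3.modeling_deepseek_v3 import DeepseekV3PreTrainedModel


class Glm4MoeLitePreTrainedModel(DeepseekV3PreTrainedModel):
    # Any checkpoint key matching this pattern is treated as expected-but-unused
    # rather than triggering an "unexpected key" warning or error.
    _keys_to_ignore_on_load_unexpected = [r"model\.mtp.*"]  # placeholder regex
```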

davidboring changed pull request status to closed