DeepPulse-80B TCM Large Model Series

DeepPulse (深度把脉) is the core achievement of 心语心言's open-source Traditional Chinese Medicine (TCM) large-model series. The series uses Qwen3-Next-80B as the base model and was deeply fine-tuned on a self-built, high-quality TCM clinical dataset. This release includes two versions:

  • DeepPulse-80B-Thinking-V0.1: Focuses on complex clinical reasoning and assisted diagnosis. It achieved first place by total score on the public evaluation, demonstrating top-tier logical reasoning capability in the TCM domain.
  • DeepPulse-80B-Instruct-V0.1: Offers strong TCM instruction-following, suitable for a wide range of TCM Q&A and interactive scenarios, ranking sixth overall.
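
Since the models are fine-tunes of Qwen3-Next-80B, a standard Hugging Face transformers chat-style call should work. The sketch below is a minimal, unverified example: the repo id PneumaAI/DeepPulse-80B-Thinking-V0.1 is taken from this card, the sample question is hypothetical, and you should adjust loading options to your hardware.

```python
MODEL_ID = "PneumaAI/DeepPulse-80B-Thinking-V0.1"  # repo id from this card

def build_messages(question):
    """Wrap a user question in the chat format expected by apply_chat_template()."""
    return [{"role": "user", "content": question}]

def generate_answer(question, max_new_tokens=1024):
    """Load the model and generate a reply (requires 80B-class GPU resources)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For the Thinking variant, expect an extended reasoning trace before the final answer, as is typical for Qwen3-family thinking models.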

Public TCM Benchmark Metrics Comparison (MedBench - TCM-5CEval)

TCM-5CEval is an authoritative evaluation benchmark for TCM large models, comprising the following five subtasks that comprehensively assess the model's TCM capabilities:

  • TCM-Exam (中医考试): Evaluates the mastery and application of fundamental TCM theories (Yin-Yang, Zang-Fu organs, etc.) and diagnostics knowledge.
  • TCM-LitQA (典籍问答): Tests deep understanding and reasoning of classic TCM texts such as "Huangdi Neijing" and "Shanghan Lun".
  • TCM-MRCD (临床诊疗): Simulates real clinical scenarios, evaluating the model's ability to analyze medical cases, perform pattern differentiation, and make prescription decisions.
  • TCM-CMM (中药方剂): Measures the model's knowledge of Chinese materia medica properties, effects, compatibility contraindications, and formula applications.
  • TCM-ClinNPT (非药物疗法): Assesses ability in acupoint selection for acupuncture, Tuina massage techniques, and pattern-based treatment for specific clinical scenarios.

| No. | Model Name | Organization/Team | Release Date | Type | Parameters | Total Score | TCM-Exam | TCM-LitQA | TCM-MRCD | TCM-CMM | TCM-ClinNPT |
|-----|------------|-------------------|--------------|------|------------|-------------|----------|-----------|----------|---------|-------------|
| 1 | DeepPulse-80B-Thinking-V0.1 | 心语心言 | 2025/12/23 | Open-source | 80B | 71.3 | 83.0 | 45.5 | 75.4 | 84.9 | 67.6 |
| 2 | HKR_TCM_HW_v1 | 港仔机器人主动健管团队 | 2025/12/12 | Closed-source | 671B | 70.8 | 85.4 | 44.2 | 73.1 | 83.8 | 67.5 |
| 3 | Gemini-2.5-Pro-nothinking | Google | 2025/03/25 | Closed-source | N/A | 69.2 | 77.9 | 62.0 | 72.4 | 72.6 | 61.2 |
| 4 | DeepSeek-V3.2 | DeepSeek | 2025/12/01 | Open-source | 671B | 66.8 | 74.5 | 44.4 | 66.8 | 80.0 | 68.3 |
| 5 | Grok-4 | xAI | 2025/07/09 | Closed-source | N/A | 66.6 | 73.0 | 59.3 | 68.4 | 68.0 | 64.2 |
| 6 | DeepPulse-80B-Instruct-V0.1 | 心语心言 | 2025/12/23 | Open-source | 80B | 66.2 | 74.4 | 40.7 | 70.6 | 79.7 | 65.6 |
| 7 | Qwen3-235B-A22B-Thinking-2507 | Alibaba | 2025/08/17 | Open-source | 235B | 64.8 | 75.5 | 40.3 | 68.5 | 78.2 | 61.5 |
| 8 | Claude-Sonnet-4.5 | Anthropic | 2025/09/29 | Closed-source | N/A | 64.8 | 69.8 | 59.3 | 67.2 | 71.7 | 56.0 |
| 9 | GPT-5 | OpenAI | 2025/08/07 | Closed-source | N/A | 63.6 | 75.0 | 51.9 | 64.1 | 66.6 | 60.6 |
| 10 | Qwen3-Next-80B-A3B-Thinking | Alibaba | 2025/09/15 | Open-source | 80B | 63.5 | 76.0 | 38.2 | 66.2 | 77.9 | 59.4 |
| 11 | Llama-4-maverick | Meta | 2025/04/06 | Open-source | 400B | 57.2 | 72.1 | 51.3 | 63.8 | 54.4 | 44.3 |
| 12 | GPT-4o | OpenAI | 2025/05/13 | Closed-source | 200B | 55.9 | 66.5 | 46.9 | 60.9 | 57.1 | 47.9 |

Note: "N/A" in the Parameters column indicates that the model's parameter count has not been publicly disclosed.

Scores for DeepSeek-V3.2, Qwen3-235B-A22B-Thinking-2507, and Qwen3-Next-80B-A3B-Thinking come from our own self-hosted deployments; scores for all other models reference publicly available leaderboard data.

TCM-5CEval: https://medbench.opencompass.org.cn/track-detail/tcmeval
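
As a quick consistency check on the table above, the Total Score column appears to be the unweighted arithmetic mean of the five subtask scores, rounded to one decimal place (an observation from the reported numbers, not a documented scoring rule). Checking the top row:

```python
# Subtask scores for DeepPulse-80B-Thinking-V0.1, copied from the table above.
subtasks = {
    "TCM-Exam": 83.0,
    "TCM-LitQA": 45.5,
    "TCM-MRCD": 75.4,
    "TCM-CMM": 84.9,
    "TCM-ClinNPT": 67.6,
}

# Unweighted mean of the five subtasks, rounded to one decimal place.
total = round(sum(subtasks.values()) / len(subtasks), 1)
print(total)  # 71.3, matching the reported Total Score
```

The same check holds for the other rows, e.g. HKR_TCM_HW_v1's subtasks also average to its reported 70.8.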

Format: Safetensors · 80B params · BF16

Model repository: PneumaAI/DeepPulse-80B-Thinking-V0.1 (fine-tuned from Qwen3-Next-80B)