C^2DLM: Causal Concept-Guided Diffusion Large Language Models
Abstract
A Causal Concept-Guided Diffusion Language Model (C^2DLM) improves reasoning capabilities by learning causal relationships between concepts, enhancing performance and training efficiency in downstream tasks.
Autoregressive (AR) language models and Diffusion Language Models (DLMs) constitute the two principal paradigms of large language models. However, both paradigms suffer from insufficient reasoning capabilities. Human reasoning inherently relies on causal knowledge and thought, which are reflected in natural language. Yet in the AR paradigm, language is modeled as next-token prediction (a strictly left-to-right, token-by-token order), whereas natural language itself exhibits more flexible causal structures. In the DLM paradigm, the attention mechanism is fully connected and entirely disregards causal order. To fill this gap, we propose a **C**ausal **C**oncept-Guided **D**iffusion **L**anguage **M**odel (C^2DLM). Starting from the DLM's fully connected attention, C^2DLM first obtains a concept-level causal graph from a teacher model, and then explicitly guides attention to learn the causal relationships between concepts. By focusing on causal relationships and avoiding interference from difficult subgoals involving causal inversion, C^2DLM improves accuracy by 12% with a roughly 3.2× training speedup on the COT-OrderPerturb task, and achieves an average gain of 1.31% across six downstream reasoning tasks. More details are available in the repository: https://github.com/Kairong-Han/C-2-DLM
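The abstract does not specify how the concept-level causal graph is injected into attention, so the sketch below is only one plausible reading, not the paper's actual implementation: edges from a teacher-provided causal graph are turned into an additive attention bias that nudges tokens of an "effect" concept to attend to tokens of its "cause" concepts, on top of the DLM's fully connected attention. All names (`causal_concept_attention_bias`, `token_to_concept`, `bonus`) are hypothetical.

```python
import torch

def causal_concept_attention_bias(
    token_to_concept: torch.Tensor,        # (seq_len,) concept id per token, -1 if no concept
    causal_edges: list,                    # (cause_concept, effect_concept) pairs from the teacher graph
    num_concepts: int,
    bonus: float = 1.0,
) -> torch.Tensor:
    """Illustrative sketch (assumption, not the paper's method): build an additive
    attention bias that encourages effect-concept tokens to attend to cause-concept
    tokens, to be added to the attention logits of a diffusion LM before softmax."""
    # reach[e, c] = 1 if concept c is a cause of concept e in the teacher's causal graph.
    reach = torch.zeros(num_concepts, num_concepts)
    for cause, effect in causal_edges:
        reach[effect, cause] = 1.0

    valid = token_to_concept >= 0                      # ignore tokens without a concept label
    ids = token_to_concept.clamp(min=0)
    # pair[i, j] = 1 if the concept of key token j causes the concept of query token i.
    pair = reach[ids[:, None], ids[None, :]]           # (seq_len, seq_len)
    mask = (valid[:, None] & valid[None, :]).float()
    return bonus * pair * mask                         # add to attention logits

# Toy example: concept 0 ("rain") causes concept 1 ("wet ground").
token_to_concept = torch.tensor([0, 0, -1, 1, 1])
bias = causal_concept_attention_bias(token_to_concept, [(0, 1)], num_concepts=2)
print(bias)  # tokens of concept 1 receive a positive bias toward tokens of concept 0
```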
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SMRC: Aligning Large Language Models with Student Reasoning for Mathematical Error Correction (2025)
- MRO: Enhancing Reasoning in Diffusion Language Models via Multi-Reward Optimization (2025)
- Large Language Model Enhanced Graph Invariant Contrastive Learning for Out-of-Distribution Recommendation (2025)
- Unlocking the Potential of Diffusion Language Models through Template Infilling (2025)
- On Powerful Ways to Generate: Autoregression, Diffusion, and Beyond (2025)
- DART: Difficulty-Adaptive Reasoning Truncation for Efficient Large Language Models (2025)
- STaR: Towards Cognitive Table Reasoning via Slow-Thinking Large Language Models (2025)