arxiv:2602.02863

"I May Not Have Articulated Myself Clearly": Diagnosing Dynamic Instability in LLM Reasoning at Inference Time

Published on Feb 2 · Submitted by Jinkun Chen on Feb 5

Abstract

Analysis of reasoning failures in large language models shows that instability signals derived from token log probabilities and entropy can predict incorrect answers and distinguish corrective from destructive instability based on the timing of distribution shifts.

AI-generated summary

Reasoning failures in large language models (LLMs) are typically measured only at the end of a generation, yet many failures manifest as a process-level breakdown: the model "loses the thread" mid-reasoning. We study whether such breakdowns are detectable from inference-time observables available in standard APIs (token log probabilities), without any training or fine-tuning. We define a simple instability signal that combines consecutive-step distributional shift (JSD) and uncertainty (entropy), summarize each trace by its peak instability strength, and show that this signal reliably predicts failure. Across GSM8K and HotpotQA, instability strength predicts wrong answers with above-chance AUC and yields a monotonic decline in bucket-level accuracy across model sizes. Crucially, we show that instability is not uniformly harmful: early instability can reflect subsequent stabilization and a correct final answer (corrective instability), whereas late instability is more often followed by failure (destructive instability), even at comparable peak magnitudes, indicating that recoverability depends not only on how strongly the distribution changes but also on when such changes occur relative to the remaining decoding horizon. The method is model-agnostic, training-free, and reproducible, and is presented as a diagnostic lens rather than a corrective or control mechanism.
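As a rough illustration of the signal described above (not the paper's released code), the per-step combination of consecutive-step Jensen-Shannon divergence and entropy, summarized by its peak, might be computed from API-style next-token probabilities like this. Function names, the multiplicative combination rule, and the assumption that each step's distribution has been reconstructed and renormalized over a common token support (e.g. from top-k logprobs) are my own illustrative choices.

```python
# Hypothetical sketch of an instability signal built from inference-time
# observables: per-step Jensen-Shannon divergence between consecutive
# next-token distributions, combined with per-step entropy, and then
# summarized by its peak over the trace.
import numpy as np
from scipy.spatial.distance import jensenshannon


def step_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of one next-token distribution."""
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())


def instability_trace(step_probs: list[np.ndarray]) -> np.ndarray:
    """Per-step instability for one decoded trace.

    step_probs[t] is the next-token probability vector at step t,
    aligned over a common token support and renormalized to sum to 1
    (e.g. reconstructed from the top-k logprobs a standard API returns).
    """
    signal = []
    for prev, curr in zip(step_probs, step_probs[1:]):
        # scipy returns the JS *distance*; squaring gives the divergence.
        jsd = jensenshannon(prev, curr, base=np.e) ** 2
        ent = step_entropy(curr)
        # Assumed combination rule: product of shift and uncertainty.
        signal.append(jsd * ent)
    return np.array(signal)


def peak_instability(step_probs: list[np.ndarray]) -> tuple[float, int]:
    """Summarize a trace by its peak instability strength and its position."""
    sig = instability_trace(step_probs)
    if sig.size == 0:
        return 0.0, 0
    t = int(np.argmax(sig))
    return float(sig[t]), t
```

The peak strength would then be the trace-level score compared against correctness (e.g. via AUC), and the peak position feeds the early-versus-late analysis discussed below.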

Community

Large language models often fail during multi-step reasoning, but the failure is usually only observable at the final answer.

This paper introduces an inference-time, training-free diagnostic signal for identifying dynamic instability during reasoning, using only the observable next-token probability distribution.

We show that instability events reliably predict reasoning failure across models and tasks, and further distinguish between destructive and corrective instability based on timing and recoverability.

The method requires no access to model internals and can be applied as a lightweight, black-box diagnostic during generation.
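To make the early-versus-late distinction concrete, one could classify a peak by where it falls relative to the length of the trace. The rule and the 0.5 cutoff below are assumptions for illustration, not the paper's exact criterion.

```python
# Illustrative classification of an instability peak as "corrective" or
# "destructive" by its relative position in the trace: early peaks tend
# to precede recovery, late peaks tend to precede failure.
def classify_peak(peak_step: int, trace_length: int, late_frac: float = 0.5) -> str:
    relative_position = peak_step / max(trace_length - 1, 1)
    return "destructive (late)" if relative_position >= late_frac else "corrective (early)"
```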
