VisMem: Latent Vision Memory Unlocks Potential of Vision-Language Models
Abstract
VisMem enhances Vision-Language Models by incorporating dynamic latent vision memories, improving performance on complex visual tasks by preserving perceptual fidelity and semantic consistency.
Despite the remarkable success of Vision-Language Models (VLMs), their performance on a range of complex visual tasks is often hindered by a "visual processing bottleneck": a propensity to lose grounding in visual evidence and a deficit of contextualized visual experience during prolonged generation. Drawing inspiration from human cognitive memory theory, which distinguishes between short-term visually-dominant memory and long-term semantically-dominant memory, we propose VisMem, a cognitively-aligned framework that equips VLMs with dynamic latent vision memories: a short-term module for fine-grained perceptual retention and a long-term module for abstract semantic consolidation. These memories are seamlessly invoked during inference, allowing VLMs to maintain both perceptual fidelity and semantic consistency throughout thinking and generation. Extensive experiments across diverse visual benchmarks for understanding, reasoning, and generation show that VisMem delivers a significant average performance boost of 11.8% relative to the vanilla model and outperforms all counterparts, establishing a new paradigm for latent-space memory enhancement. The code will be available at https://github.com/YU-deep/VisMem.git.
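The page gives only the high-level design, not the implementation. As a rough illustration of the dual-memory idea described in the abstract, here is a minimal PyTorch sketch: a short-term memory read directly from fine-grained visual tokens via cross-attention, and a long-term memory of learned slots that consolidate abstract semantics, both queried by the decoder's hidden states during generation. All names (`VisMemSketch`, `consolidate`, `long_slots`, the gating) are hypothetical choices for this sketch, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class VisMemSketch(nn.Module):
    """Hypothetical sketch of a dual latent vision memory (not the paper's code)."""

    def __init__(self, d_model=768, n_heads=8, long_slots=16):
        super().__init__()
        # Short-term memory: read fine-grained visual tokens via cross-attention.
        self.short_read = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Long-term memory: learned slots that consolidate abstract semantics.
        self.long_slots = nn.Parameter(torch.randn(long_slots, d_model) * 0.02)
        self.long_write = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.long_read = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Gate to blend perceptual and semantic readouts (assumed mechanism).
        self.gate = nn.Linear(2 * d_model, d_model)

    def consolidate(self, vision_tokens):
        # Slots attend over visual tokens -> semantically-dominant memory.
        slots = self.long_slots.unsqueeze(0).expand(vision_tokens.size(0), -1, -1)
        long_mem, _ = self.long_write(slots, vision_tokens, vision_tokens)
        return long_mem

    def forward(self, hidden, vision_tokens, long_mem):
        # hidden: (B, T, D) decoder states during thinking/generation.
        percept, _ = self.short_read(hidden, vision_tokens, vision_tokens)
        semantic, _ = self.long_read(hidden, long_mem, long_mem)
        # Blend perceptual fidelity with semantic consistency, then add residually.
        g = torch.sigmoid(self.gate(torch.cat([percept, semantic], dim=-1)))
        return hidden + g * percept + (1 - g) * semantic

# Toy usage with random tensors standing in for real features.
mem = VisMemSketch()
vision = torch.randn(2, 196, 768)   # patch-level visual tokens
hidden = torch.randn(2, 32, 768)    # decoder hidden states
long_mem = mem.consolidate(vision)  # one-time semantic consolidation
out = mem(hidden, vision, long_mem) # (2, 32, 768)
```

The split mirrors the abstract's cognitive framing: the short-term path keeps generation grounded in raw visual evidence, while the small slot bank supplies a stable semantic summary over long outputs; how the real VisMem writes, invokes, and trains these memories is left to the paper.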
Community
Similar papers recommended by the Semantic Scholar API:
- Bridging Hidden States in Vision-Language Models (2025)
- Rethinking Visual Information Processing in Multimodal LLMs (2025)
- Causally-Grounded Dual-Path Attention Intervention for Object Hallucination Mitigation in LVLMs (2025)
- PROPA: Toward Process-level Optimization in Visual Reasoning via Reinforcement Learning (2025)
- VLURes: Benchmarking VLM Visual and Linguistic Understanding in Low-Resource Languages (2025)
- CoCoVa: Chain of Continuous Vision-Language Thought for Latent Space Reasoning (2025)
- Visual Jigsaw Post-Training Improves MLLMs (2025)