- MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks
  Paper • 2410.10563 • Published • 38
- Latent Action Pretraining from Videos
  Paper • 2410.11758 • Published • 3
- TVBench: Redesigning Video-Language Evaluation
  Paper • 2410.07752 • Published • 6
- Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation
  Paper • 2501.03225 • Published • 7
Collections including paper arxiv:2503.06749

- A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions
  Paper • 2312.08578 • Published • 20
- ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks
  Paper • 2312.08583 • Published • 11
- Vision-Language Models as a Source of Rewards
  Paper • 2312.09187 • Published • 14
- StemGen: A music generation model that listens
  Paper • 2312.08723 • Published • 49

- BLINK: Multimodal Large Language Models Can See but Not Perceive
  Paper • 2404.12390 • Published • 26
- TextSquare: Scaling up Text-Centric Visual Instruction Tuning
  Paper • 2404.12803 • Published • 30
- Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models
  Paper • 2404.13013 • Published • 31
- InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD
  Paper • 2404.06512 • Published • 30