Align-Then-stEer Collection Open-sourced models of our paper "Align-Then-stEer: Adapting the Vision-Language Action Models through Unified Latent Guidance" • 3 items • Updated 6 days ago
Towards Efficient LLM Grounding for Embodied Multi-Agent Collaboration Paper • 2405.14314 • Published May 23, 2024 • 1
Align-Then-stEer: Adapting the Vision-Language Action Models through Unified Latent Guidance Paper • 2509.02055 • Published Sep 2 • 1
Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach Paper • 2512.02834 • Published 12 days ago • 39 • 3
VLA-TTS: TACO Collection Models in "Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach". Credits to Rhodes Team @ TeleAI. • 4 items • Updated 10 days ago • 2
SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics Paper • 2506.01844 • Published Jun 2 • 143