---
license: cc-by-sa-4.0
language:
- en
task_categories:
- video-text-to-text
size_categories:
- 1K<n<10K
---
<h1 align="center">
<font color=#0088cc>O</font><font color=#060270>V</font><font color=#0088cc>O</font>-Bench: How Far is Your Video-LLMs from Real-World <font color=#0088cc>O</font>nline <font color=#060270>V</font>ide<b style="color: #0088cc;">O</b> Understanding?
</h1>
<p align="center">
🔥🔥 OVO-Bench has been accepted to CVPR 2025! 🔥🔥
</p>
<p align="center">
<a href="https://arxiv.org/abs/2501.05510" style="margin-right: 10px;">
<img src="https://img.shields.io/badge/arXiv-2501.05510-b31b1b.svg?logo=arXiv">
</a>
</p>
> **Important Note:** The current codebase has been modified relative to our initial arXiv paper. We strongly recommend that any use of OVO-Bench be based on the current edition.
## Introduction
### 🌟 Three distinct problem-solving modes
- **Backward Tracing**: trace back to past events to answer the question.
- **Real-Time Visual Perception**: understand and respond to events as they unfold at the current timestamp.
- **Forward Active Responding**: delay the response until sufficient future information becomes available to answer the question accurately.
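
To make the distinction between the three modes concrete, here is a minimal Python sketch of the decision a streaming model faces at each timestamp. The `Sample` schema and the `should_answer` helper are illustrative assumptions, not the dataset's actual fields or the official evaluation code.

```python
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    BACKWARD_TRACING = "backward_tracing"
    REAL_TIME_PERCEPTION = "real_time_perception"
    FORWARD_ACTIVE_RESPONDING = "forward_active_responding"


@dataclass
class Sample:
    video_path: str
    question: str
    t_query: float  # timestamp (seconds) at which the question is posed
    mode: Mode


def should_answer(sample: Sample, t_now: float, evidence_sufficient: bool) -> bool:
    """Decide whether a streaming model should commit to an answer at t_now.

    The model only ever sees the stream up to t_now; the modes differ in
    *when* an answer is expected relative to the query timestamp.
    """
    if sample.mode is Mode.BACKWARD_TRACING:
        # Answer at the query time by recalling events already observed.
        return t_now >= sample.t_query
    if sample.mode is Mode.REAL_TIME_PERCEPTION:
        # Answer at the query time from what is visible right now.
        return t_now >= sample.t_query
    # Forward Active Responding: keep watching past the query time and
    # answer only once enough future evidence has accumulated.
    return t_now > sample.t_query and evidence_sufficient


if __name__ == "__main__":
    s = Sample("demo.mp4", "Did the chef add salt?",
               t_query=42.0, mode=Mode.FORWARD_ACTIVE_RESPONDING)
    print(should_answer(s, t_now=42.0, evidence_sufficient=False))  # False: keep watching
    print(should_answer(s, t_now=55.0, evidence_sufficient=True))   # True: answer now
```

The point this sketch is meant to highlight is that in Forward Active Responding, *when* the model answers matters as well as *what* it answers: committing at the query timestamp, before the relevant future information has arrived, is premature.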