
Improve dataset card for MovieCORE

#1
by nielsr (HF Staff) - opened

This PR significantly improves and corrects the dataset card for the MovieCORE repository.

Key changes include:

  • Corrected Title: Updated the title to "MovieCORE: COgnitive REasoning in Movies" to accurately reflect the artifact.
  • Updated Metadata:
    • Added pipeline_tag: video-text-to-text for better discoverability of this Video Question Answering (VQA) artifact (a metadata sketch follows this list).
  • Comprehensive Content: The entire content has been rewritten and expanded, pulling detailed information directly from the MovieCORE GitHub repository, including:
    • An overview of the dataset's purpose and scope.
    • Detailed sections on data preparation, installation, baselines, and evaluation dimensions.
    • A "Usage" section with an evaluation script code snippet and input/output formats, as provided in the GitHub README (a generic data-loading sketch also follows this list).
    • The BibTeX citation for the paper.
    • Sections on contributing and license.
  • Corrected Links: All links (paper, project page, GitHub, Hugging Face Dataset) have been updated to point to the correct MovieCORE resources, replacing previous incorrect links.
  • Enhanced Presentation: Incorporated badges and images from the GitHub README for improved readability and visual appeal.
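
For context, the metadata change above amounts to adding a single pipeline_tag entry to the card's YAML front matter. The snippet below is a minimal sketch of how such an update could be applied programmatically with huggingface_hub.metadata_update; the repo id is a placeholder, not the actual MovieCORE repository path, and this is only one possible way to make the change.

```python
from huggingface_hub import metadata_update

# Placeholder repo id -- substitute the actual MovieCORE dataset repository.
REPO_ID = "your-org/MovieCORE"

# Adds pipeline_tag to the card's YAML front matter so the artifact surfaces
# under the "video-text-to-text" filter on the Hub. This pushes a commit to
# the repo, so a write token with access to it is required.
metadata_update(
    repo_id=REPO_ID,
    metadata={"pipeline_tag": "video-text-to-text"},
    repo_type="dataset",
    overwrite=False,  # leave any existing metadata fields untouched
)
```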
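
Likewise, for readers heading to the new "Usage" section, the sketch below shows a generic way to pull the dataset with the datasets library; the repo id is again a placeholder, and the real input/output format is the one documented in the card and the GitHub README.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual MovieCORE dataset repository.
ds = load_dataset("your-org/MovieCORE")

# Inspect the available splits and one raw example; field names are whatever
# the dataset actually ships, not assumptions made by this sketch.
print(ds)
first_split = next(iter(ds.values()))
print(first_split[0])
```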

This update provides a clear, comprehensive, and accurate representation of the MovieCORE artifact on the Hugging Face Hub.

Joslefaure changed pull request status to merged
