Enhance dataset card for NautData: Add metadata, links, description, tasks, and sample usage

#1
by nielsr HF Staff - opened

This pull request significantly enhances the dataset card for NautData by:

  • Adding relevant task_categories: image-classification, object-detection, image-to-text, question-answering, image-text-to-text.
  • Including language: en and descriptive tags: underwater, multimodal, LMM, instruction-following, scene-understanding.
  • Adding size_categories: 1M<n<10M to reflect the dataset's scale.
  • Providing a comprehensive description of the dataset, its purpose, and the tasks it supports, clarifying that this repository (Wang017/NautData) holds the images and linking to Wang017/NautData-Instruct for annotations.
  • Including direct links to the associated paper (NAUTILUS: A Large Multimodal Model for Underwater Scene Understanding), the project page (https://h-embodvis.github.io/NAUTILUS/), and the GitHub repository (https://github.com/H-EmbodVis/NAUTILUS).
  • Adding a "Sample Usage" section with practical Python code snippets for local inference, directly extracted from the project's GitHub README.
  • Including the official BibTeX citation for proper attribution.
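For reference, the metadata fields listed above would appear in the card's YAML front matter roughly as follows. This is a sketch reconstructed from the bullet points, not a verbatim copy of the merged card; field ordering and any additional fields in the actual card may differ.

```yaml
# Hypothetical reconstruction of the dataset card front matter,
# based on the fields described in this PR; the merged card is authoritative.
task_categories:
  - image-classification
  - object-detection
  - image-to-text
  - question-answering
  - image-text-to-text
language:
  - en
tags:
  - underwater
  - multimodal
  - LMM
  - instruction-following
  - scene-understanding
size_categories:
  - 1M<n<10M
```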

These changes make the dataset card much more informative, and the dataset itself easier for researchers to discover and use.

H-EmbodVis org

Thank you so much for this fantastic contribution!
The enhanced dataset card looks great — really appreciate the effort in enriching it with detailed metadata, task categories, links, sample usage, and the citation. This will make the dataset much more discoverable and helpful for the community, especially for researchers working on underwater multimodal tasks.
Thanks again for taking the time to improve the project.

Wang017 changed pull request status to merged
