|
|
--- |
|
|
license: cc-by-nc-sa-4.0 |
|
|
task_categories: |
|
|
- visual-question-answering |
|
|
- multiple-choice |
|
|
- question-answering |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- Remote Sensing |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
--- |
|
|
# 🐙GitHub |
|
|
Information about this dataset and its evaluation code can be found in this repo: **https://github.com/AI9Stars/XLRS-Bench**
|
|
|
|
|
# 📜Dataset License |
|
|
|
|
|
The annotations of this dataset are released under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0). For images from:
|
|
|
|
|
- **[DOTA](https://captain-whu.github.io/DOTA)** |
|
|
  RGB images are sourced from Google Earth and CycloMedia (academic use only; commercial use is prohibited, and the Google Earth terms of use apply).
|
|
|
|
|
- **[ITCVD](https://phys-techsciences.datastations.nl/dataset.xhtml?persistentId=doi:10.17026/dans-xnc-h2fu)** |
|
|
Licensed under [CC-BY-NC-SA-4.0](http://creativecommons.org/licenses/by-nc-sa/4.0). |
|
|
|
|
|
- **[MiniFrance](https://ieee-dataport.org/open-access/minifrance), [HRSCD](https://ieee-dataport.org/open-access/hrscd-high-resolution-semantic-change-detection-dataset)** |
|
|
Released under [IGN’s "licence ouverte"](https://web.archive.org/web/20200717042533/http://www.ign.fr/institut/activites/lign-lopen-data). |
|
|
|
|
|
- **[Toronto, Potsdam](https://www.isprs.org/education/benchmarks/UrbanSemLab/default.aspx)**
|
|
  The Toronto test data images are derived from the Downtown Toronto dataset provided by Optech Inc., First Base Solutions Inc., the GeoICT Lab at York University, and ISPRS WG III/4, and are subject to the following conditions:
|
|
  1. The data must not be used for purposes other than research. Any other use is prohibited.
|
|
  2. The data must not be used outside the context of this test project, in particular while the project is still ongoing (i.e., until September 2012). Whether the data will be available for other research purposes after the end of this project is still under discussion.
|
|
3. The data must not be distributed to third parties. Any person interested in the data may obtain them via ISPRS WG III/4. |
|
|
  4. Data users should include the following acknowledgement in any publication resulting from the datasets:
|
|
“*The authors would like to acknowledge the provision of the Downtown Toronto data set by Optech Inc., First Base Solutions Inc., GeoICT Lab at York University, and ISPRS WG III/4.*” |
|
|
|
|
|
**Disclaimer:** |
|
|
If any party believes their rights have been infringed, please contact us immediately at **[wfx23@nudt.edu.cn](mailto:wfx23@nudt.edu.cn)**. We will promptly remove any infringing content.
|
|
|
|
|
|
|
|
# 📖Citation |
|
|
|
|
|
If you find our work helpful, please consider citing: |
|
|
|
|
|
```tex |
|
|
@inproceedings{wang2025xlrs, |
|
|
  title={{XLRS-Bench}: Could Your Multimodal {LLMs} Understand Extremely Large Ultra-High-Resolution Remote Sensing Imagery?},
|
|
author={Wang, Fengxiang and Wang, Hongzhen and Guo, Zonghao and Wang, Di and Wang, Yulin and Chen, Mingshuo and Ma, Qiang and Lan, Long and Yang, Wenjing and Zhang, Jing and others}, |
|
|
booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference}, |
|
|
pages={14325--14336}, |
|
|
year={2025} |
|
|
} |
|
|
|
|
|
@article{wang2025geollava, |
|
|
title={GeoLLaVA-8K: Scaling Remote-Sensing Multimodal Large Language Models to 8K Resolution}, |
|
|
author={Wang, Fengxiang and Chen, Mingshuo and Li, Yueying and Wang, Di and Wang, Haotian and Guo, Zonghao and Wang, Zefan and Shan, Boqi and Lan, Long and Wang, Yulin and others}, |
|
|
journal={arXiv preprint arXiv:2505.21375}, |
|
|
year={2025} |
|
|
} |
|
|
``` |
|
|
|