---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: object-detection
library_name: mmdetection
---

## Introduction

We introduce UGRC, a real-world aerial-view dataset captured in Utah (USA). The dataset has a ground sampling distance (GSD) of 12.5 cm per pixel and is sampled into 112 px × 112 px images. For annotation, we label only the centers of small vehicles. To leverage the abundance of open-source, bounding-box-based object detection frameworks, we define a fixed-size ground-truth bounding box of 42.36 px × 42.36 px centered at each vehicle. Annotations are provided in COCO format [x, y, w, h], where "small" in the annotation JSON files denotes the small-vehicle class and (x, y) is the top-left corner of the box. We use AP50 as the evaluation metric.
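The conversion from a labeled vehicle center to a COCO-style annotation can be sketched as follows. This is an illustrative sketch, not the dataset's generation code; the `category_id` value and the example coordinates are assumptions, not values taken from the annotation files.

```python
# Sketch: turn a labeled vehicle center into a fixed-size COCO-style
# annotation, following the convention described above.

BOX_SIZE = 42.36  # fixed ground-truth box side length, in pixels


def center_to_coco_bbox(cx, cy, size=BOX_SIZE):
    """Return a COCO [x, y, w, h] box centered at (cx, cy)."""
    return [cx - size / 2.0, cy - size / 2.0, size, size]


def make_annotation(ann_id, image_id, cx, cy):
    """Build a minimal COCO annotation dict for one vehicle center."""
    bbox = center_to_coco_bbox(cx, cy)
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": 1,  # assumed id for the "small" (small vehicle) class
        "bbox": bbox,      # (x, y) is the box's top-left corner
        "area": bbox[2] * bbox[3],
        "iscrowd": 0,
    }


if __name__ == "__main__":
    # A vehicle centered in a 112 px × 112 px tile:
    print(make_annotation(ann_id=1, image_id=1, cx=56.0, cy=56.0)["bbox"])
```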
## Model Usage
This folder contains four detectors trained and tested on Real UGRC data, along with the configuration files we use for training and testing.
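Since the detectors are scored with AP50, a prediction counts as a true positive when its IoU with a ground-truth box reaches 0.5. A minimal sketch of that matching criterion on COCO-style [x, y, w, h] boxes (illustrative only, not the evaluation code used here):

```python
def iou_xywh(a, b):
    """IoU of two COCO-style [x, y, w, h] boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0


def is_true_positive(pred, gt, thr=0.5):
    """AP50 matching criterion: IoU with the ground truth >= 0.5."""
    return iou_xywh(pred, gt) >= thr


if __name__ == "__main__":
    gt = [34.82, 34.82, 42.36, 42.36]
    print(is_true_positive([36.0, 36.0, 42.36, 42.36], gt))
```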

## References

➡️ **Paper:** [Adapting Vehicle Detectors for Aerial Imagery to Unseen Domains with Weak Supervision](https://arxiv.org/abs/2507.20976)

➡️ **Project Page:** [Webpage](https://humansensinglab.github.io/AGenDA/)

➡️ **Data:** [AGenDA](https://github.com/humansensinglab/AGenDA/tree/main/Data)