---
license: cc-by-nc-sa-4.0
task_categories:
- object-detection
tags:
- chemistry
viewer: false
---
# Molecule Detection Benchmark Collection
## 📑 Task 1: Multi-scale Chemical Structure Detection
Localizing all molecular structures within an image is a fundamental prerequisite for chemical structure recognition. Since user-provided images may vary widely in scale—ranging from a single molecule, to multiple molecules, to an entire PDF page—molecular detection algorithms must handle extreme scale variability. To evaluate this capability, we construct **MolDet-Bench-General**, a benchmark designed to assess multi-scale molecular localization performance.
### MolDet-Bench-General
MolDet-Bench-General encompasses a diverse set of molecular detection tasks across multiple domains and scales. It includes single-molecule localization (i.e., determining whether an image contains a molecule), multi-molecule detection, molecular localization within reaction schemes and tables, handwritten molecule detection, and molecule localization on PDF pages (a total of 799 images). This benchmark therefore provides a comprehensive evaluation of detection performance under varied and challenging real-world conditions.
Note that, to accommodate multi-scale detection and to avoid incomplete molecule crops, the benchmark does not use tight, edge-aligned bounding boxes. Instead, each molecular bounding box is intentionally expanded with an adaptive margin, so that downstream recognition models are not adversely affected by truncated molecular structures.
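To illustrate the idea, expanding a tight box by a margin proportional to its size might look like the following sketch (the `expand_box` helper and the 5% relative margin are assumptions for illustration, not the benchmark's actual annotation rule):

```python
def expand_box(x1, y1, x2, y2, img_w, img_h, margin_ratio=0.05):
    """Expand a tight bounding box by a margin proportional to its size,
    clamped to the image bounds. The 5% ratio is illustrative only."""
    mx = (x2 - x1) * margin_ratio
    my = (y2 - y1) * margin_ratio
    return (
        max(0, x1 - mx),
        max(0, y1 - my),
        min(img_w, x2 + mx),
        min(img_h, y2 + my),
    )

# A 100x100 box at (50, 50) inside a 640x480 image gains a 5-pixel margin per side.
print(expand_box(50, 50, 150, 150, 640, 480))  # (45.0, 45.0, 155.0, 155.0)
```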

## 📑 Task 2: Chemical Structure Detection in Documents
Accurately localizing molecular structures in chemistry and biology literature and patents is a critically important task. We introduce **MolDet-Bench-Doc**, a benchmark for molecular localization within document pages, and additionally incorporate the third-party **BioVista** benchmark as part of our evaluation suite.
### MolDet-Bench-Doc
We build upon the patent and scientific PDF corpus from the Uni-Parser benchmark (see the *Uni-Parser Technical Report*), which comprises 50 patents from 20 patent offices and 100 journal articles from multiple open-access sources. From this collection, we extract all pages containing molecular structures, yielding 447 pages with a total of 2,178 molecules. Molecular localization is annotated with edge-aligned (tight) bounding boxes. This subset forms MolDet-Bench-Doc, a benchmark designed to evaluate molecular localization performance directly on PDF document pages.

### BioVista
We also processed the BioVista molecular object detection benchmark (introduced in *BioMiner: A Multi-modal System for Automated Mining of Protein-Ligand Bioactivity Data from Literature*) and converted its annotations into the YOLO format. The benchmark covers 500 papers in protein biology and related fields, containing a total of 11,212 molecular instances. Note, however, that BioVista's annotation guidelines for molecular localization differ from ours, and the dataset contains some missing or incomplete molecular annotations. (Raw dataset: https://github.com/jiaxianyan/BioMiner)
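Since all annotations in this collection are distributed in the Ultralytics YOLO format, each image's label file contains one line per molecule: a class id followed by the box center and size, normalized to [0, 1]. A minimal sketch of the conversion from pixel-space `xyxy` boxes (the `xyxy_to_yolo` helper name is an assumption):

```python
def xyxy_to_yolo(x1, y1, x2, y2, img_w, img_h, cls=0):
    """Convert a pixel-space (x1, y1, x2, y2) box to one YOLO label line:
    '<class> <x_center> <y_center> <width> <height>', all normalized to [0, 1]."""
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A molecule occupying the left half of a 640x480 page:
print(xyxy_to_yolo(0, 0, 320, 480, 640, 480))  # 0 0.250000 0.500000 0.500000 1.000000
```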

## 🎯 Benchmark Evaluation
All benchmark datasets are organized in the Ultralytics YOLO format, and we use `ultralytics` to evaluate *mAP50* and *mAP50-95*.
You may install the library via:
```bash
pip install ultralytics
```
Example evaluation code:
```python
from ultralytics import YOLO

model = YOLO("/your/path/to/yolo_weights.pt")
metrics = model.val(
    data="./path/to/benchmark/dataset.yaml",  # Path to the benchmark dataset YAML
    imgsz=640,  # Inference resolution (e.g., 640 for multi-scale MolDet-General models, 960 for document-optimized MolDet-Doc models)
    split="val",
    classes=[0],  # Evaluate only the molecule class
)
print("mAP50:", metrics.box.map50)
print("mAP50-95:", metrics.box.map)
```
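The dataset YAML passed via `data=` follows the standard Ultralytics layout; a minimal sketch (all paths below are placeholders for wherever you unpack the benchmark):

```yaml
# Ultralytics dataset config (paths are placeholders)
path: /your/path/to/MolDet-Bench-General  # dataset root
train: images/val  # placeholder; only the val split is used for evaluation
val: images/val    # evaluation images
names:
  0: molecule      # single detection class
```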
For further usage instructions, please refer to the [official Ultralytics documentation](https://docs.ultralytics.com/modes/val/).
## 📊 Benchmark Leaderboard
### MolDet-Bench-General
| Model | mAP50 | mAP50-95 | Speed (T4 TensorRT10) |
| ---- | ----- | -------- | ----- |
| MolDetv2-General-n | **0.9872** | **0.8776** | 1.5 ± 0.0 ms |
| MolDet-General-l | 0.9675 | 0.8349 | 6.2 ± 0.1 ms |
| MolDet-General-m | 0.9702 | 0.8269 | 4.7 ± 0.1 ms |
| MolDet-General-s | 0.9685 | 0.8260 | 2.5 ± 0.1 ms |
| MolDet-General-n | 0.9574 | 0.8052 | 1.5 ± 0.0 ms |
### MolDet-Bench-Doc
| Model | mAP50 | mAP50-95 | Speed (T4 TensorRT10) |
| ---- | ----- | -------- | ----- |
| MolDetv2-Doc-n | **0.9936** | 0.9544 | 3.1 ± 0.0 ms |
| Uni-Parser-LD | 0.9935 | **0.9679** | 10.1 ± 0.2 ms |
| MolDet-Doc-s | 0.9927 | 0.9531 | 5.5 ± 0.1 ms |
| MolDet-Doc-l | 0.9926 | 0.9367 | 13.1 ± 0.3 ms |
| MolDet-General-l | 0.9921 | 0.8251 | 6.2 ± 0.1 ms |
| MolDet-General-m | 0.9921 | 0.8063 | 4.7 ± 0.1 ms |
| MolDet-Doc-n | 0.9913 | 0.9555 | 3.1 ± 0.0 ms |
| MolDet-Doc-m | 0.9913 | 0.9539 | 9.9 ± 0.2 ms |
| MolDetv2-General-n | 0.9908 | 0.8003 | 1.5 ± 0.0 ms |
| MolDet-General-s | 0.9878 | 0.8535 | 2.5 ± 0.1 ms |
| MolDet-General-n | 0.9836 | 0.8093 | 1.5 ± 0.0 ms |
### BioVista
| Model | mAP50 | Speed (T4 TensorRT10) |
| ---- | ----- | ----- |
| Uni-Parser-LD | **0.9806** | 10.1 ± 0.2 ms |
| MolDetv2-Doc-n | 0.9748 | 3.1 ± 0.0 ms |
| MolDetv2-General-n | 0.9609 | 2.5 ± 0.0 ms |
| MolDet-Doc-l | 0.9607 | 13.1 ± 0.3 ms |
| MolDet-Doc-m | 0.9558 | 9.9 ± 0.2 ms |
| MolDet-General-m | 0.9460 | 4.7 ± 0.1 ms |
| MolDet-General-l | 0.9447 | 6.2 ± 0.1 ms |
| MolDet-Doc-s | 0.9416 | 5.5 ± 0.1 ms |
| MolDet-Doc-n | 0.9391 | 3.1 ± 0.0 ms |
| MolDet-General-s | 0.9318 | 2.5 ± 0.1 ms |
| BioMiner | 0.9290 | - |
| MolDet-General-n | 0.9258 | 1.5 ± 0.0 ms |
| MolMiner | 0.8990 | - |

## 📖 Citation
If you use this benchmark in your work, please cite:
- MolDet-Bench & MolDetv2 Model
```
Coming soon!
```
- MolDet Model
```
@inproceedings{fang2025molparser,
  title={MolParser: End-to-end visual recognition of molecule structures in the wild},
  author={Fang, Xi and Wang, Jiankun and Cai, Xiaochen and Chen, Shangqian and Yang, Shuwen and Tao, Haoyi and Wang, Nan and Yao, Lin and Zhang, Linfeng and Ke, Guolin},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={24528--24538},
  year={2025}
}
```
- Uni-Parser-LD Model
```
@misc{fang2025uniparsertechnicalreport,
  title={Uni-Parser Technical Report},
  author={Xi Fang and Haoyi Tao and Shuwen Yang and Suyang Zhong and Haocheng Lu and Han Lyu and Chaozheng Huang and Xinyu Li and Linfeng Zhang and Guolin Ke},
  year={2025},
  eprint={2512.15098},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.15098},
}
```
- BioVista Benchmark
```
@article{Yan2025.04.22.648951,
  title={BioMiner: A Multi-modal System for Automated Mining of Protein-Ligand Bioactivity Data from Literature},
  author={Yan, Jiaxian and Zhu, Jintao and Yang, Yuhang and Liu, Qi and Zhang, Kai and Zhang, Zaixi and Liu, Xukai and Zhang, Boyan and Gao, Kaiyuan and Xiao, Jinchuan and Chen, Enhong},
  doi={10.1101/2025.04.22.648951},
  journal={bioRxiv},
  year={2025}
}
```