---
license: cc-by-nc-sa-4.0
task_categories:
- object-detection
tags:
- chemistry
---

# Molecule Detection Benchmark Collection

## 📑 Task 1: Multi-scale Chemical Structure Detection

Localizing all molecular structures within an image is a fundamental prerequisite for chemical structure recognition. Since user-provided images may vary widely in scale—ranging from a single molecule, to multiple molecules, to an entire PDF page—molecular detection algorithms must handle extreme scale variability. To evaluate this capability, we construct **MolDet-Bench-General**, a benchmark designed to assess multi-scale molecular localization performance.

### MolDet-Bench-General

MolDet-Bench-General encompasses a diverse set of molecular detection tasks across multiple domains and scales. It includes single-molecule localization (i.e., determining whether an image contains a molecule), multi-molecule detection, molecular localization within reaction schemes and tables, handwritten molecule detection, and molecule localization on PDF pages. This benchmark therefore provides a comprehensive evaluation of detection performance under varied and challenging real-world conditions.

It is important to note that, to better accommodate multi-scale molecular detection and to avoid incomplete molecule crops, the benchmark does not use tight, edge-aligned bounding boxes. Instead, each molecular bounding box is intentionally expanded with an adaptive margin, ensuring that downstream recognition models are not adversely affected by truncated molecular structures.

todo: example
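The expansion described above can be sketched as follows. This is a minimal illustration only; the `expand_box` function, the fractional-margin heuristic, and the default value are our assumptions, not the benchmark's actual annotation procedure:

```python
def expand_box(box, img_w, img_h, margin_frac=0.1):
    """Expand a tight (x1, y1, x2, y2) pixel box by an adaptive margin.

    The margin scales with the box size, so small molecules receive a small
    absolute padding and large ones receive more; the expanded box is
    clamped to the image bounds.
    """
    x1, y1, x2, y2 = box
    mx = (x2 - x1) * margin_frac  # horizontal margin, proportional to width
    my = (y2 - y1) * margin_frac  # vertical margin, proportional to height
    return (
        max(0, x1 - mx),
        max(0, y1 - my),
        min(img_w, x2 + mx),
        min(img_h, y2 + my),
    )

# A 100x100 box centered in a 640x640 image grows by 10 px on each side.
print(expand_box((200, 200, 300, 300), 640, 640))  # (190.0, 190.0, 310.0, 310.0)
```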

## 📑 Task 2: Chemical Structure Detection in Documents

Accurately localizing molecular structures in chemistry and biology literature and patents is a critically important task. We introduce **MolDet-Bench-Doc**, a benchmark for molecular localization within document pages, and additionally incorporate the third-party **BioVista** benchmark as part of our evaluation suite.

### MolDet-Bench-Doc

We build upon the patent and scientific PDF corpus of the Uni-Parser Benchmark (described in the *Uni-Parser Technical Report*), which comprises 50 patents from 20 patent offices and 100 journal articles from multiple open-access sources. From this collection, we extract all pages containing molecular structures, yielding 447 pages with a total of 2,178 molecules. Molecular localization is annotated using edge-aligned (tight) bounding boxes. This subset forms MolDet-Bench-Doc, a benchmark designed to evaluate molecular localization performance directly on PDF document pages.

todo: example
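Per-page counts like those above are easy to tally from a YOLO-format split, since each page has one label file and each non-empty line is one box. A hedged sketch — the directory layout and function name are assumptions, not prescribed by the benchmark:

```python
from pathlib import Path

def count_instances(labels_dir):
    """Count pages and annotated boxes in a YOLO labels directory.

    Each page corresponds to one .txt label file; each non-empty line
    ("class cx cy w h") is one bounding box, so the line count equals
    the molecule count.
    """
    pages, boxes = 0, 0
    for txt in sorted(Path(labels_dir).glob("*.txt")):
        pages += 1
        boxes += sum(1 for line in txt.read_text().splitlines() if line.strip())
    return pages, boxes
```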

### BioVista

We also processed the BioVista molecular object detection benchmark (introduced in *BioMiner: A Multi-modal System for Automated Mining of Protein-Ligand Bioactivity Data from Literature*) and converted its annotations into the YOLO format. The benchmark covers 500 papers in protein biology and related fields, containing a total of 11,212 molecular instances. Note, however, that the BioVista annotation guidelines for molecular localization differ from ours, and the dataset contains some missing or incomplete molecular annotations.

todo: example
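The YOLO conversion mentioned above amounts to normalizing each pixel box into a class id plus center/width/height line. A minimal sketch — the field order follows the standard YOLO label format, while the function name and formatting are ours:

```python
def to_yolo_line(box, img_w, img_h, cls=0):
    """Convert an (x1, y1, x2, y2) pixel box to a YOLO label line.

    YOLO stores "class x_center y_center width height", with all four
    geometry fields normalized to [0, 1] by the image dimensions.
    """
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2 / img_w  # normalized box center x
    cy = (y1 + y2) / 2 / img_h  # normalized box center y
    w = (x2 - x1) / img_w       # normalized box width
    h = (y2 - y1) / img_h       # normalized box height
    return f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

print(to_yolo_line((100, 200, 300, 400), 1000, 1000))
# 0 0.200000 0.300000 0.200000 0.200000
```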

## 🎯 Benchmark Evaluation

All benchmark datasets are organized in the Ultralytics YOLO format, and we use `ultralytics` to evaluate *mAP50* and *mAP50-95*.
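For reference, the dataset YAML consumed by `ultralytics` follows the standard Ultralytics layout; the paths and class name below are illustrative placeholders, not the benchmark's actual file:

```yaml
path: /your/path/to/MolDet-Bench-General  # dataset root (placeholder)
train: images/train  # not needed for evaluation-only use
val: images/val      # evaluation split
names:
  0: molecule        # single detection class
```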

You may install the library via:

```bash
pip install ultralytics
```

Example evaluation code:

```python
from ultralytics import YOLO

# Load the detection weights you want to evaluate.
model = YOLO("/your/path/to/yolo_weights.pt")

metrics = model.val(
    data="./path/to/benchmark/dataset.yaml",  # path to the benchmark dataset YAML
    imgsz=640,    # inference resolution (e.g., 640 for multi-scale MolDet-General models, 960 for document-optimized MolDet-Doc models)
    split="val",  # evaluate on the validation split
    classes=[0],  # restrict evaluation to class 0 (molecule)
)

print(metrics.box.map50)  # mAP50
print(metrics.box.map)    # mAP50-95
```

For further usage instructions, please refer to the [official Ultralytics documentation](https://docs.ultralytics.com/modes/val/).

## 📊 Benchmark Leaderboard



## 📖 Citation

If you use this benchmark in your work, please cite:

MolDet-Bench-General:
```

```

MolDet-Bench-Doc:
```

```

BioVista Benchmark:
```

```

MolDet Model:
```

```

MolDetv2 Model:
```

```