---
license: mit
task_categories:
  - graph-ml
tags:
  - chemistry
  - molecular-biology
  - drug-discovery
  - multi-modal
dataset_info:
  features:
    - name: edge_index
      list:
        list: int64
    - name: edge_attr
      list:
        list: int64
    - name: x
      list:
        list: int64
    - name: ba_edge_index
      list:
        list: int64
    - name: ba_edge_attr
      list:
        list: float64
    - name: fra_edge_index
      list:
        list: int64
    - name: fra_edge_attr
      list:
        list: int64
    - name: cluster_idx
      list: int64
    - name: bafra_edge_index
      list:
        list: int64
    - name: bafra_edge_attr
      list:
        list: float64
    - name: smiles
      dtype: string
  splits:
    - name: train
      num_bytes: 17772414767
      num_examples: 1551232
    - name: validation
      num_bytes: 454862268
      num_examples: 39775
  download_size: 1889271320
  dataset_size: 18227277035
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
---

# MuMo Pretraining Dataset

## Abstract

Multimodal molecular models often suffer from 3D conformer unreliability and modality collapse, limiting their robustness and generalization. We propose MuMo, a structured multimodal fusion framework that addresses these challenges in molecular representation through two key strategies. To reduce the instability of conformer-dependent fusion, we design a Structured Fusion Pipeline (SFP) that combines 2D topology and 3D geometry into a unified and stable structural prior. To mitigate modality collapse caused by naive fusion, we introduce a Progressive Injection (PI) mechanism that asymmetrically integrates this prior into the sequence stream, preserving modality-specific modeling while enabling cross-modal enrichment. Built on a state space backbone, MuMo supports long-range dependency modeling and robust information propagation. Across 29 benchmark tasks from Therapeutics Data Commons (TDC) and MoleculeNet, MuMo achieves an average improvement of 2.7% over the best-performing baseline on each task, ranking first on 22 of them, including a 27% improvement on the LD50 task. These results validate its robustness to 3D conformer noise and the effectiveness of multimodal fusion in molecular representation. The code is publicly available.

## Dataset Overview

- Source: filtered ChEMBL (~1.6M molecules)
- Purpose: language-style pretraining over SMILES with graph/geometry supervision
- Processing: generated using `preprocess/mol3d_processor.py`
- Splits: train (≈1.55M examples), validation (≈39.8K examples)

You can load this dataset directly via the Hugging Face Datasets API or via our training scripts with `--dataset_name`.

## Data Schema (per example)

- `smiles` (string): canonical SMILES string
- Graph keys (2D topology and basic chemistry):
  - `x`: node feature matrix (list of lists of int)
  - `edge_index`: 2×E edge indices (list of two lists of int)
  - `edge_attr`: edge feature matrix (list of lists of int)
- Fragment-level keys (BRICS-based):
  - `fra_edge_index`: fragment connectivity indices (list of lists of int)
  - `fra_edge_attr`: fragment edge features (list of lists of int)
- Geometry-level keys:
  - `ba_edge_index`: geometry-based connections (list of lists of int)
  - `ba_edge_attr`: features for geometry connections (list of lists of float)
- Geometry–fragment keys:
  - `bafra_edge_index`: geometry–fragment connectivity (list of lists of int)
  - `bafra_edge_attr`: features for geometry fragments (list of lists of float)
- `cluster_idx` (list of int): fragment membership index per atom (which fragment each atom belongs to)

Notes:

- Shapes and dtypes may be adapted by downstream collators; values are stored as lists for portability.
- All lists are serialized for JSONL storage and converted to tensors during training; a conversion sketch follows below.
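As a minimal sketch of that conversion (assuming PyTorch and PyTorch Geometric, which are not requirements of the dataset itself, and not necessarily matching the MuMo training collators), one raw example can be mapped to a `torch_geometric.data.Data` object:

```python
import torch
from torch_geometric.data import Data  # assumes torch_geometric is installed

def example_to_data(example):
    """Illustrative: turn one raw example (plain lists) into a PyG Data object.

    Field shapes follow the schema above; dtypes follow the dataset metadata
    (int64 for topology/features, float64 for geometry edge features).
    """
    return Data(
        x=torch.tensor(example["x"], dtype=torch.long),
        edge_index=torch.tensor(example["edge_index"], dtype=torch.long),
        edge_attr=torch.tensor(example["edge_attr"], dtype=torch.long),
        ba_edge_index=torch.tensor(example["ba_edge_index"], dtype=torch.long),
        ba_edge_attr=torch.tensor(example["ba_edge_attr"], dtype=torch.float),
        fra_edge_index=torch.tensor(example["fra_edge_index"], dtype=torch.long),
        fra_edge_attr=torch.tensor(example["fra_edge_attr"], dtype=torch.long),
        bafra_edge_index=torch.tensor(example["bafra_edge_index"], dtype=torch.long),
        bafra_edge_attr=torch.tensor(example["bafra_edge_attr"], dtype=torch.float),
        cluster_idx=torch.tensor(example["cluster_idx"], dtype=torch.long),
        smiles=example["smiles"],
    )
```

Because `edge_index` is stored as two parallel lists (source row, target row), it already matches the `[2, E]` layout PyG expects, so no transpose is needed.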

## Usage

Python (Datasets):

```python
from datasets import load_dataset

ds = load_dataset("zihaojing/MuMo-Pretraining")
print(ds)
example = ds["train"][0]
print(example.keys())
```
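Since the full dataset is about 18 GB on disk (≈1.9 GB download), you can also stream it and iterate lazily without materializing everything locally; this is standard `datasets` functionality, not MuMo-specific:

```python
from datasets import load_dataset

# Stream the train split instead of downloading the full dataset up front.
ds = load_dataset("zihaojing/MuMo-Pretraining", split="train", streaming=True)

for example in ds:
    print(example["smiles"])  # first molecule's SMILES string
    break
```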

Training script (Transformers):

```bash
deepspeed train/pretrain.py \
  --dataset_name zihaojing/MuMo-Pretraining \
  --do_train --do_eval \
  ...
```

## Processing Pipeline

We use `preprocess/mol3d_processor.py` to derive graph and geometry features from SMILES:

- Atom features, bonds, and 2D topology populate `x`, `edge_index`, and `edge_attr`.
- BRICS-based fragmentation provides `fra_edge_index`, `fra_edge_attr`, and `cluster_idx` (see the sketch after this list).
- Geometry connections and fragment geometry provide `ba_edge_index`, `ba_edge_attr`, `bafra_edge_index`, and `bafra_edge_attr`.
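The exact feature extraction lives in `preprocess/mol3d_processor.py`. As an illustrative sketch only (assuming RDKit, and not necessarily matching the processor's handling of edge cases), per-atom fragment membership like `cluster_idx` can be derived from BRICS bonds as follows:

```python
from rdkit import Chem
from rdkit.Chem import BRICS

def brics_cluster_idx(smiles: str):
    """Illustrative only: assign each atom a BRICS fragment id.

    The actual mol3d_processor.py may differ (feature choices, sanitization,
    molecules with no BRICS bonds, etc.).
    """
    mol = Chem.MolFromSmiles(smiles)
    n_atoms = mol.GetNumAtoms()
    # Break every BRICS bond; the dummy atoms RDKit adds are appended after
    # the originals, so original atom indices 0..n_atoms-1 are preserved.
    broken = BRICS.BreakBRICSBonds(mol)
    cluster_idx = [0] * n_atoms
    for frag_id, atom_ids in enumerate(Chem.GetMolFrags(broken)):
        for idx in atom_ids:
            if idx < n_atoms:  # skip the appended dummy atoms
                cluster_idx[idx] = frag_id
    return cluster_idx

print(brics_cluster_idx("CCOC(=O)c1ccccc1"))  # one fragment id per atom
```

A molecule with no BRICS-cleavable bonds yields a single fragment, i.e. all-zero `cluster_idx`.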

## Citation

If you find this work useful, please cite:

Zihao Jing, Yan Sun, Yan Yi Li, Sugitha Janarthanan, Alana Deng, and Pingzhao Hu. "MuMo: Multimodal Molecular Representation Learning via Structural Fusion and Progressive Injection." In Advances in Neural Information Processing Systems (NeurIPS), 2025.

```bibtex
@inproceedings{jing2025mumo,
  title        = {MuMo: Multimodal Molecular Representation Learning via Structural Fusion and Progressive Injection},
  author       = {Jing, Zihao and Sun, Yan and Li, Yan Yi and Janarthanan, Sugitha and Deng, Alana and Hu, Pingzhao},
  booktitle    = {Advances in Neural Information Processing Systems (NeurIPS)},
  year         = {2025}
}
```

## License

MIT