# CoPeP Continual Learning Checkpoints

This repository contains 90 checkpoints (9 methods × 10 temporal tasks) from continual learning experiments with the AMPLIFY protein language model (~120M parameters).

## Loading a checkpoint

```python
from transformers import AutoModel

# "replay/task_5" is one example; any <method>/task_N subfolder listed
# below can be loaded the same way.
model = AutoModel.from_pretrained(
    "chandar-lab/copep-checkpoints",
    subfolder="replay/task_5",
    trust_remote_code=True,
)
```
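
Continuing from the snippet above, the sketch below shows one way to pull per-residue embeddings out of a loaded checkpoint. It is illustrative only: it assumes a compatible tokenizer can be taken from the base AMPLIFY release and that the model's remote code exposes hidden states via `output_hidden_states`, as the base AMPLIFY model does.

```python
import torch
from transformers import AutoTokenizer

# Assumption: the tokenizer from the base AMPLIFY release is compatible
# with these checkpoints.
tokenizer = AutoTokenizer.from_pretrained(
    "chandar-lab/AMPLIFY_120M", trust_remote_code=True
)

sequence = "MSVKKTAIVG"  # toy amino-acid sequence for illustration
input_ids = tokenizer.encode(sequence, return_tensors="pt")

model.eval()
with torch.no_grad():
    # Assumption: the remote code accepts output_hidden_states and returns
    # an output object with a hidden_states tuple.
    output = model(input_ids, output_hidden_states=True)

# Last-layer per-residue embeddings: (1, sequence_length, hidden_size=640)
print(output.hidden_states[-1].shape)
```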

## Available checkpoints

| Method | Tasks |
| --- | --- |
| continual | task_0 – task_9 |
| gradient_ascent | task_0 – task_9 |
| hare_tortoise | task_0 – task_9 |
| joint | task_0 – task_9 |
| match | task_0 – task_9 |
| random_labels | task_0 – task_9 |
| replay | task_0 – task_9 |
| shrink_perturb | task_0 – task_9 |
| single_year | task_0 – task_9 |

Each `<method>/task_N` subfolder contains a `config.json` and a `model.safetensors`.
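
If only the raw files are needed (for example, to load the weights manually rather than through `AutoModel`), a single subfolder can be fetched with `huggingface_hub`. This is a minimal sketch; `replay/task_5` is again just an example.

```python
from huggingface_hub import snapshot_download

# Download only one <method>/task_N subfolder (config.json + model.safetensors).
local_dir = snapshot_download(
    repo_id="chandar-lab/copep-checkpoints",
    allow_patterns=["replay/task_5/*"],
)
print(local_dir)  # local snapshot path containing replay/task_5/
```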

## Task mapping

- `task_0`: pre-2004 (base model)
- `task_1` – `task_9`: successive temporal splits of UniRef data

For methods that start from task_1 (continual, gradient_ascent, match, random_labels, replay, shrink_perturb), task_0 is the same checkpoint as single_year/task_0 (the base pre-trained model).
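
As a quick, unofficial sanity check, this equivalence can be verified by comparing the state dicts of two `task_0` checkpoints:

```python
import torch
from transformers import AutoModel

def load(subfolder):
    return AutoModel.from_pretrained(
        "chandar-lab/copep-checkpoints",
        subfolder=subfolder,
        trust_remote_code=True,
    )

base = load("single_year/task_0")
replay = load("replay/task_0")

# True if every parameter tensor matches exactly.
identical = all(
    torch.equal(param, replay.state_dict()[name])
    for name, param in base.state_dict().items()
)
print("task_0 weights identical:", identical)
```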

## Model architecture

- Architecture: Transformer encoder with RoPE + SwiGLU
- Parameters: ~120M
- Config: `hidden_size=640`, `num_hidden_layers=24`, `num_attention_heads=10`, `intermediate_size=2560`
- Vocab size: 32 (amino-acid tokens + special tokens)
- Max length: 512 (training), 50,000 (inference with RoPE extrapolation)
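
These values can be read back from any checkpoint's `config.json`. A minimal check, assuming the config uses the attribute names listed above:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "chandar-lab/copep-checkpoints",
    subfolder="replay/task_5",
    trust_remote_code=True,
)
print(config.hidden_size, config.num_hidden_layers,
      config.num_attention_heads, config.intermediate_size)
```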