CoPeP: Continual Pretraining for Protein Language Models
This repository contains 90 checkpoints (9 methods × 10 tasks) from continual learning experiments with the AMPLIFY protein language model (120M parameters). A checkpoint is selected by passing its `method/task_N` path as the `subfolder` argument:

```python
from transformers import AutoModel

# Load the task-5 checkpoint of the replay method; change `subfolder`
# to select any other method/task combination from the table below.
model = AutoModel.from_pretrained(
    "chandar-lab/copep-checkpoints",
    subfolder="replay/task_5",
    trust_remote_code=True,
)
```
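
A minimal sketch of running a forward pass with a loaded checkpoint. The tokenizer repository (`chandar-lab/AMPLIFY_120M`) and the `output.logits` attribute are assumptions based on the upstream AMPLIFY model card, since the checkpoint subfolders ship weights and configs only:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    "chandar-lab/copep-checkpoints",
    subfolder="replay/task_5",
    trust_remote_code=True,
)
# Assumption: the tokenizer comes from the base AMPLIFY 120M repository,
# because the checkpoint subfolders do not include tokenizer files.
tokenizer = AutoTokenizer.from_pretrained("chandar-lab/AMPLIFY_120M", trust_remote_code=True)

model.eval()
sequence = "MSVKKKPVQG"  # toy protein sequence
input_ids = tokenizer.encode(sequence, return_tensors="pt")
with torch.no_grad():
    output = model(input_ids)
# Assumption: the AMPLIFY remote code exposes masked-language-modeling logits.
print(output.logits.shape)  # (batch, sequence_length, vocab_size)
```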
| Method | Tasks |
|---|---|
| continual | task_0, task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9 |
| gradient_ascent | task_0, task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9 |
| hare_tortoise | task_0, task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9 |
| joint | task_0, task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9 |
| match | task_0, task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9 |
| random_labels | task_0, task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9 |
| replay | task_0, task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9 |
| shrink_perturb | task_0, task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9 |
| single_year | task_0, task_1, task_2, task_3, task_4, task_5, task_6, task_7, task_8, task_9 |
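
Because every method shares the same task_0 through task_9 layout, checkpoints can be enumerated programmatically by building the subfolder path. A minimal sketch; the `load_checkpoint` helper is illustrative and not part of the repository:

```python
from transformers import AutoModel

# Method names and task count taken from the table above.
METHODS = [
    "continual", "gradient_ascent", "hare_tortoise", "joint", "match",
    "random_labels", "replay", "shrink_perturb", "single_year",
]
NUM_TASKS = 10  # task_0 ... task_9

def load_checkpoint(method: str, task: int):
    """Load a single checkpoint by method name and task index."""
    assert method in METHODS and 0 <= task < NUM_TASKS
    return AutoModel.from_pretrained(
        "chandar-lab/copep-checkpoints",
        subfolder=f"{method}/task_{task}",
        trust_remote_code=True,
    )

# Example: walk through the replay checkpoints in task order.
for task in range(NUM_TASKS):
    model = load_checkpoint("replay", task)
    # ... evaluate the checkpoint here ...
```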
Each task_N subfolder contains a config.json and model.safetensors.
For methods that start from task_1 (continual, gradient_ascent, match,
random_labels, replay, shrink_perturb), task_0 is the same checkpoint as
single_year/task_0 (the base pre-trained model).
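
Checkpoints can also be fetched as plain files for offline use with `huggingface_hub`. A sketch assuming the folder layout above, downloading only the replay checkpoints:

```python
from huggingface_hub import snapshot_download

# Download only the replay/* subfolders (config.json and model.safetensors
# for each task) and return the local cache path.
local_dir = snapshot_download(
    repo_id="chandar-lab/copep-checkpoints",
    allow_patterns=["replay/*"],
)
print(local_dir)
```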