# CoPeP

Collection: Continual Pretraining for Protein Language Models
This dataset is organized for continual-learning experiments on protein sequences.
Repository: `chandar-lab/CoPeP`

- `train/`: 252 parquet shards (`data-00000-of-00252.parquet` ... `data-00251-of-00252.parquet`)
- `splits/`: 10 task index parquet files (`task_0.parquet` ... `task_9.parquet`)
- `val/`: validation parquet (`val.parquet`)

Task-to-year mapping:

| File | Split | Year |
|---|---|---|
| `task_0.parquet` | `task_0` | 2015 |
| `task_1.parquet` | `task_1` | 2016 |
| `task_2.parquet` | `task_2` | 2017 |
| `task_3.parquet` | `task_3` | 2018 |
| `task_4.parquet` | `task_4` | 2019 |
| `task_5.parquet` | `task_5` | 2020 |
| `task_6.parquet` | `task_6` | 2021 |
| `task_7.parquet` | `task_7` | 2022 |
| `task_8.parquet` | `task_8` | 2023 |
| `task_9.parquet` | `task_9` | 2024 |

### `splits/task_*.parquet`
The `splits/task_*.parquet` files are index-style split definitions keyed by `row_idx`. They are intended to be joined with records from `train/` (or other source files) using `row_idx`, rather than treated as standalone full-example datasets.
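To inspect one of these index files on its own, you can download it and open it with pandas. The sketch below assumes only the `row_idx` column described above and requires `huggingface_hub`, `pandas`, and `pyarrow`:

```python
# Sketch: peek at a single task index file without loading the full dataset.
# Assumes the `row_idx` column described above.
import pandas as pd
from huggingface_hub import hf_hub_download

# Download one index file from the dataset repo.
path = hf_hub_download(
    repo_id="chandar-lab/CoPeP",
    filename="splits/task_0.parquet",
    repo_type="dataset",
)

task0_idx = pd.read_parquet(path)
print(task0_idx.columns.tolist())  # expected to include "row_idx"
print(len(task0_idx))              # number of train rows assigned to task_0
print(task0_idx.head())
```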
To train on a given task:

- Load the `train` data together with the corresponding `splits/task_N.parquet` file.
- Select train rows by `row_idx` to materialize that task's examples.
- Use `val/val.parquet` for evaluation.

```python
from datasets import load_dataset

# Replace with your final dataset repo id
repo_id = "chandar-lab/CoPeP"

# 1) Load the train split directly
train_ds = load_dataset(repo_id, split="train")

# 2) Load one task index split directly
task0_idx = load_dataset(repo_id, split="task_0")

# 3) Materialize examples by selecting train rows using row_idx
task0_rows = task0_idx["row_idx"]
task0_examples = train_ds.select(task0_rows)

print(task0_examples)
print(task0_examples[0])
```
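The validation data lives in `val/val.parquet`. If it is not exposed as a named split in your version of the repo, a fallback (sketched here, not an official loading recipe) is to fetch the file and read it as a standalone parquet dataset:

```python
# Sketch: load the validation parquet directly. Assumes only that
# val/val.parquet exists in the repo, as described above.
from datasets import load_dataset
from huggingface_hub import hf_hub_download

val_path = hf_hub_download(
    repo_id="chandar-lab/CoPeP",
    filename="val/val.parquet",
    repo_type="dataset",
)

# A parquet file loaded via data_files is exposed under the default "train" split.
val_ds = load_dataset("parquet", data_files=val_path, split="train")
print(val_ds)
```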
When loading a task index with `load_dataset`, use `task_0` as the split name; `split='splits/task_0'` is not supported as a split identifier.
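Since the task splits are ordered chronologically (2015 through 2024), continual-pretraining runs would typically visit them in sequence. The sketch below only materializes each task's examples in order; `update_model` is a hypothetical placeholder for your own training step and is not part of this repository:

```python
# Sketch of a sequential continual-pretraining loop over the ten yearly tasks.
from datasets import load_dataset

repo_id = "chandar-lab/CoPeP"
train_ds = load_dataset(repo_id, split="train")

for task_id in range(10):
    # Each task split only stores row indices into the train data.
    task_idx = load_dataset(repo_id, split=f"task_{task_id}")
    task_examples = train_ds.select(task_idx["row_idx"])
    print(f"task_{task_id}: {len(task_examples)} examples")
    # update_model(task_examples)  # hypothetical continual-pretraining step
```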