Unable to run the script due to missing key in CONFIG
#1 opened by Manojkl
Hi,
The code provided in the description doesn't work. I also tried running it in Google Colab and got the error below.
Date: 14/06/2025
Time: 11:47 AM IST
# Load model directly
from transformers import AutoTokenizer, AutoModelForVideoClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/vjepa2-vitl-fpc32-256-diving48")
model = AutoModelForVideoClassification.from_pretrained("facebook/vjepa2-vitl-fpc32-256-diving48")
Output
/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_auth.py:94: UserWarning:
The secret `HF_TOKEN` does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
You will be able to reuse this secret in all of your notebooks.
Please note that authentication is recommended but still optional to access public models or datasets.
warnings.warn(
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
1169 try:
-> 1170 config_class = CONFIG_MAPPING[config_dict["model_type"]]
1171 except KeyError:
KeyError: 'vjepa2'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
1170 config_class = CONFIG_MAPPING[config_dict["model_type"]]
1171 except KeyError:
-> 1172 raise ValueError(
1173 f"The checkpoint you are trying to load has model type `{config_dict['model_type']}` "
1174 "but Transformers does not recognize this architecture. This could be because of an "
ValueError: The checkpoint you are trying to load has model type `vjepa2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
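As the error message itself explains, the `ValueError` means the installed Transformers release predates the registration of `vjepa2` in `CONFIG_MAPPING`, so the fix is to upgrade (or install from source) and restart the runtime. A minimal pre-flight version check is sketched below; the minimum version `4.52.0` is an assumption for illustration, so verify the actual minimum on the model card or release notes.

```python
# Pre-flight check: older Transformers releases raise the ValueError above
# because "vjepa2" is missing from CONFIG_MAPPING.
MIN_VERSION = "4.52.0"  # assumed first release with V-JEPA 2 support; verify on the model card


def parse_version(v: str) -> tuple:
    """Turn '4.41.2' into (4, 41, 2) for comparison; drops local/pre-release tags."""
    return tuple(int(part) for part in v.split("+")[0].split(".")[:3] if part.isdigit())


def supports_vjepa2(installed: str) -> bool:
    """Return True if the installed Transformers version should recognize `vjepa2`."""
    return parse_version(installed) >= parse_version(MIN_VERSION)


print(supports_vjepa2("4.41.2"))  # an older Colab default -> False, upgrade needed
```

If the check fails, run `pip install --upgrade transformers` (or `pip install git+https://github.com/huggingface/transformers.git` for a very new checkpoint, as the message suggests) and then restart the Colab session so the new version is actually imported.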
config.json: 100% 4.25k/4.25k [00:00<00:00, 115kB/s]
koustuvs changed discussion status to closed