This error happened when using nanonets/Nanonets-OCR2-1.5B-exp:
```
ValueError                                Traceback (most recent call last)
/tmp/ipython-input-1181996783.py in <cell line: 0>()
      2 from transformers import pipeline
      3
----> 4 pipe = pipeline("image-text-to-text", model="nanonets/Nanonets-OCR2-1.5B-exp")
      5 messages = [
      6     {

5 frames

/usr/local/lib/python3.12/dist-packages/transformers/configuration_utils.py in layer_type_validation(layer_types, num_hidden_layers)
   1375         raise ValueError(f"The layer_types entries must be in {ALLOWED_LAYER_TYPES}")
   1376     if num_hidden_layers is not None and num_hidden_layers != len(layer_types):
-> 1377         raise ValueError(
   1378             f"num_hidden_layers ({num_hidden_layers}) must be equal to the number of layer types "
   1379             f"({len(layer_types)})"

ValueError: num_hidden_layers (16) must be equal to the number of layer types (28)
```
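Note that the error is raised while the config is being validated, before any input is processed, so the pipeline call alone reproduces it:

```python
# Minimal reproduction: the ValueError comes from config validation
# during model loading, before any messages are passed to the pipeline.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="nanonets/Nanonets-OCR2-1.5B-exp")
```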
Same problem; I am using a MacBook Pro.
Same when launching on vLLM.
Same when running with AutoModelForImageTextToText on CPU.
Same on Colab
Refer to https://huggingface.co/nanonets/Nanonets-OCR2-1.5B-exp/discussions/2: the model's config.json ships with 28 "full_attention" entries under "layer_types", while "num_hidden_layers" is 16. Trimming the list to 16 entries in config.json fixes the problem:

```json
"layer_types": [
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention",
    "full_attention"
]
```
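If you cannot edit the hosted config.json yourself, a sketch for patching a local copy instead (the local directory path is an arbitrary choice, and the trimming helper is mine; it assumes the extra entries are simply surplus, as the linked discussion suggests):

```python
# Sketch: download the model, trim the oversized "layer_types" list in
# config.json so it matches num_hidden_layers, then load the patched copy.
import json
import os

from huggingface_hub import snapshot_download
from transformers import pipeline

local_dir = snapshot_download(
    "nanonets/Nanonets-OCR2-1.5B-exp",
    local_dir="./Nanonets-OCR2-1.5B-exp",  # arbitrary local path
)
config_path = os.path.join(local_dir, "config.json")

with open(config_path) as f:
    cfg = json.load(f)

def trim_layer_types(section):
    """Trim layer_types to num_hidden_layers entries if they disagree."""
    n = section.get("num_hidden_layers")
    layer_types = section.get("layer_types")
    if n is not None and layer_types is not None and len(layer_types) != n:
        section["layer_types"] = layer_types[:n]

trim_layer_types(cfg)                         # flat config layout
trim_layer_types(cfg.get("text_config", {}))  # nested layout, if present

with open(config_path, "w") as f:
    json.dump(cfg, f, indent=2)

pipe = pipeline("image-text-to-text", model=local_dir)
```

The same patched directory can then be pointed at from vLLM or AutoModelForImageTextToText.from_pretrained, since they all read the same config.json.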