[Tip] Running Solar-Open-100B on vLLM - workaround for two compatibility issues
#25 · opened by davi0600
Following up on the issue reported in the previous discussion, I found a workaround that gets vLLM working with Solar-Open-100B for now. Two small patches are needed.
Fix 1: ALLOWED_LAYER_TYPES ImportError
vllm/config/model.py, line 14:
# Before
from transformers.configuration_utils import ALLOWED_LAYER_TYPES
# After
from transformers.configuration_utils import ALLOWED_MLP_LAYER_TYPES
ALLOWED_LAYER_TYPES = ALLOWED_MLP_LAYER_TYPES  # keep the old name so the rest of the file is untouched
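If you'd rather not hard-rename the import, a version-tolerant fallback should also work. This is just a sketch, assuming the constant was renamed rather than removed in newer transformers:

# Try the old name first; fall back to the renamed constant on newer transformers.
try:
    from transformers.configuration_utils import ALLOWED_LAYER_TYPES
except ImportError:
    from transformers.configuration_utils import (
        ALLOWED_MLP_LAYER_TYPES as ALLOWED_LAYER_TYPES,
    )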
Fix 2: use_qk_norm AttributeError in SolarOpenDecoderLayer
vllm/model_executor/models/solar_open.py, in the SolarOpenDecoderLayer class:
# Before
use_qk_norm=config.use_qk_norm,
# After
use_qk_norm=getattr(config, "use_qk_norm", False),  # fall back to False when the config omits the field
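You can also check whether the published checkpoint config defines use_qk_norm at all before patching. A minimal check that only downloads config.json, not the weights (trust_remote_code is my assumption for the custom architecture):

# Only fetches the config, not the 100B weights.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("upstage/Solar-Open-100B", trust_remote_code=True)
print(getattr(cfg, "use_qk_norm", "<not set>"))  # same fallback the patch uses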
After these two patches, vllm serve upstage/Solar-Open-100B --tensor-parallel-size 4 loads the model and generates correctly.
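For a quick smoke test against the OpenAI-compatible endpoint, something like the following sketch works, assuming the default http://localhost:8000 and no API key configured:

# Minimal request against the vLLM OpenAI-compatible server started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="upstage/Solar-Open-100B",  # served model name defaults to the repo id passed to vllm serve
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=32,
)
print(resp.choices[0].message.content)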
These are hacky workarounds, not proper upstream fixes. But for anyone stuck right now, this should get you going.
Env: Upstage custom vLLM (0.12.1.dev1+solaropen) / transformers==5.0.0 / CUDA 12.8 / 4x GPU
There was one more place to patch: vllm/transformers_utils/config.py needs the same ALLOWED_LAYER_TYPES alias. Full diff of all three changes below.
diff -ruN a/vllm/config/model.py b/vllm/config/model.py
--- a/vllm/config/model.py
+++ b/vllm/config/model.py
@@ -11,7 +11,8 @@
from pydantic import ConfigDict, SkipValidation, field_validator, model_validator
from pydantic.dataclasses import dataclass
from safetensors.torch import _TYPES as _SAFETENSORS_TO_TORCH_DTYPE
-from transformers.configuration_utils import ALLOWED_LAYER_TYPES
+from transformers.configuration_utils import ALLOWED_MLP_LAYER_TYPES
+ALLOWED_LAYER_TYPES = ALLOWED_MLP_LAYER_TYPES
import vllm.envs as envs
from vllm.attention.backends.registry import AttentionBackendEnum
from vllm.config.multimodal import MMCacheType, MMEncoderTPMode, MultiModalConfig
diff -ruN a/vllm/model_executor/models/solar_open.py b/vllm/model_executor/models/solar_open.py
--- a/vllm/model_executor/models/solar_open.py
+++ b/vllm/model_executor/models/solar_open.py
@@ -351,7 +351,7 @@
cache_config=cache_config,
quant_config=quant_config,
prefix=f"{prefix}.self_attn",
- use_qk_norm=config.use_qk_norm,
+ use_qk_norm=getattr(config, "use_qk_norm", False),
)
if (
diff -ruN a/vllm/transformers_utils/config.py b/vllm/transformers_utils/config.py
--- a/vllm/transformers_utils/config.py
+++ b/vllm/transformers_utils/config.py
@@ -15,7 +15,8 @@
)
from packaging.version import Version
from transformers import GenerationConfig, PretrainedConfig
-from transformers.configuration_utils import ALLOWED_LAYER_TYPES
+from transformers.configuration_utils import ALLOWED_MLP_LAYER_TYPES
+ALLOWED_LAYER_TYPES = ALLOWED_MLP_LAYER_TYPES
from transformers.models.auto.image_processing_auto import get_image_processor_config
from transformers.models.auto.modeling_auto import (
MODEL_FOR_CAUSAL_LM_MAPPING_NAMES,
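To confirm the patches took effect without launching the full server, importing the two patched config modules is enough to surface the original ImportError (the use_qk_norm fix only shows up later, at model init). A quick sanity sketch:

# Both imports fail with the ALLOWED_LAYER_TYPES ImportError on an unpatched tree.
import vllm.config.model
import vllm.transformers_utils.config

print(vllm.config.model.ALLOWED_LAYER_TYPES)  # the alias added by the patch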