runtime error
Exit code: 1. Reason: zer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  23:               tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  24:           tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  25:               tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  26:              tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  27:                   tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  28:              general.quantization_version u32              = 2
llama_model_loader: - kv  29:                         general.file_type u32              = 15
llama_model_loader: - type  f32:  145 tensors
llama_model_loader: - type q4_K:  217 tensors
llama_model_loader: - type q6_K:   37 tensors
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen3vl'
llama_load_model_from_file: failed to load model
Downloading model...
Loading model into memory...
Traceback (most recent call last):
  File "/home/user/app/app.py", line 15, in <module>
    llm = Llama(
        model_path=model_path,
        n_ctx=2048,
        n_threads=2
    )
  File "/usr/local/lib/python3.13/site-packages/llama_cpp/llama.py", line 369, in __init__
    internals.LlamaModel(
    ~~~~~~~~~~~~~~~~~~~~^
        path_model=self.model_path,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
        params=self.model_params,
        ^^^^^^^^^^^^^^^^^^^^^^^^^
        verbose=self.verbose,
        ^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/llama_cpp/_internals.py", line 56, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: /home/user/.cache/huggingface/hub/models--Qwen--Qwen3-VL-8B-Instruct-GGUF/snapshots/f982a07559d4a2f6c8744d840bf6fccab30eea96/Qwen3VL-8B-Instruct-Q4_K_M.gguf
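The decisive line in the log is `unknown model architecture: 'qwen3vl'`: the llama.cpp build bundled with the installed llama-cpp-python predates support for the `qwen3vl` architecture string that this GGUF declares in its header, so the load is rejected before the traceback. As a diagnostic, the architecture string can be read straight out of the file's GGUF metadata without loading the model. Below is a minimal sketch; `read_architecture` and `write_minimal_gguf` are hypothetical helper names, and the parser assumes `general.architecture` appears among the leading string-typed metadata entries (llama.cpp's converters write it first):

```python
import struct

GGUF_TYPE_STRING = 8  # value-type id for strings in the GGUF spec


def read_architecture(path):
    """Return the general.architecture string from a GGUF file header.

    Minimal parser: reads magic, version, tensor count, and kv count,
    then scans string-typed metadata entries until the key is found.
    Stops at the first non-string value for simplicity (an assumption;
    a full parser would skip every value type).
    """
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        (version,) = struct.unpack("<I", f.read(4))
        (tensor_count,) = struct.unpack("<Q", f.read(8))
        (kv_count,) = struct.unpack("<Q", f.read(8))

        def read_string():
            (n,) = struct.unpack("<Q", f.read(8))
            return f.read(n).decode("utf-8")

        for _ in range(kv_count):
            key = read_string()
            (vtype,) = struct.unpack("<I", f.read(4))
            if vtype != GGUF_TYPE_STRING:
                break  # simplification: only leading string kvs are scanned
            value = read_string()
            if key == "general.architecture":
                return value
    return None


def write_minimal_gguf(path, arch):
    """Write a header-only GGUF file carrying just general.architecture.

    Hypothetical helper used here only to demonstrate the parser; real
    model files carry many more metadata keys plus tensor data.
    """
    key = b"general.architecture"
    val = arch.encode("utf-8")
    with open(path, "wb") as f:
        f.write(b"GGUF")
        f.write(struct.pack("<I", 3))         # GGUF version
        f.write(struct.pack("<Q", 0))         # tensor count
        f.write(struct.pack("<Q", 1))         # metadata kv count
        f.write(struct.pack("<Q", len(key))); f.write(key)
        f.write(struct.pack("<I", GGUF_TYPE_STRING))
        f.write(struct.pack("<Q", len(val))); f.write(val)
```

If such a check reports an architecture string (here `qwen3vl`) that the installed llama.cpp does not recognize, the usual remedy is upgrading llama-cpp-python to a release whose bundled llama.cpp supports that architecture, rather than changing the loading code.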