UD-Q4_K_XL Error when UD-Q3_K_XL Works.

#1 by mtcl - opened
(base) mukul@jarvis:~/dev-ai/llama.cpp$ CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES="0,1" ./build/bin/llama-server \
        --model /media/mukul/data/models/unsloth/GLM-4.7-GGUF/UD-Q4_K_XL/GLM-4.7-UD-Q4_K_XL-00001-of-00005.gguf \
        --alias unsloth/GLM-4.7 \
        --ctx-size 131072 \
        -fa on \
        -np 1 -kvu \
        --temp 0.6 \
        --top-p 0.95 \
        --top-k 40 \
        -b 4096 -ub 4096 \
        -ngl 99 \
        -ot ".ffn_(up)_exps.=CPU" \
        --threads 56 \
        --jinja \
        --host 0.0.0.0 \
        --port 10002
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA RTX PRO 6000 Blackwell Workstation Edition, compute capability 12.0, VMM: yes
  Device 1: NVIDIA RTX PRO 6000 Blackwell Workstation Edition, compute capability 12.0, VMM: yes
build: 7517 (a6a552e4e) with GNU 13.3.0 for Linux x86_64
system info: n_threads = 56, n_threads_batch = 56, total_threads = 112

system_info: n_threads = 56 (n_threads_batch = 56) / 112 | CUDA : ARCHS = 1200 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | AMX_INT8 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 | 

init: using 111 threads for HTTP server
start: binding port with default address family
main: loading model
srv    load_model: loading model '/media/mukul/data/models/unsloth/GLM-4.7-GGUF/UD-Q4_K_XL/GLM-4.7-UD-Q4_K_XL-00001-of-00005.gguf'
common_init_result: fitting params to device memory, for bugs during this step try to reproduce them with -fit off, or provide --verbose logs if the bug only occurs with -fit on
llama_model_load: error loading model: invalid model: tensor 'blk.90.ffn_up_exps.weight' is duplicated
llama_model_load_from_file_impl: failed to load model
llama_params_fit: failed to fit params to free device memory: failed to load model
llama_params_fit: fitting params to free memory took 0.24 seconds
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA RTX PRO 6000 Blackwell Workstation Edition) (0000:16:00.0) - 95580 MiB free
llama_model_load_from_file_impl: using device CUDA1 (NVIDIA RTX PRO 6000 Blackwell Workstation Edition) (0000:ac:00.0) - 96674 MiB free
llama_model_load: error loading model: invalid model: tensor 'blk.90.ffn_up_exps.weight' is duplicated
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model '/media/mukul/data/models/unsloth/GLM-4.7-GGUF/UD-Q4_K_XL/GLM-4.7-UD-Q4_K_XL-00001-of-00005.gguf'
srv    load_model: failed to load model, '/media/mukul/data/models/unsloth/GLM-4.7-GGUF/UD-Q4_K_XL/GLM-4.7-UD-Q4_K_XL-00001-of-00005.gguf'
srv    operator(): operator(): cleaning up before exit...
main: exiting due to model loading error
(base) mukul@jarvis:~/dev-ai/llama.cpp$ 
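For anyone hitting the same thing: a quick way to check which tensor names are duplicated across the shards is the gguf Python package (pip install gguf). This is just a sketch - the glob pattern is assumed from the paths in the log above:

import glob
from collections import Counter
from gguf import GGUFReader

# Shard paths assumed from the log above
shards = sorted(glob.glob("/media/mukul/data/models/unsloth/GLM-4.7-GGUF/UD-Q4_K_XL/GLM-4.7-UD-Q4_K_XL-*.gguf"))

names = Counter()
for shard in shards:
    reader = GGUFReader(shard)  # memory-maps the shard, so this is cheap
    for tensor in reader.tensors:
        names[tensor.name] += 1

dupes = {name: count for name, count in names.items() if count > 1}
print(dupes or "no duplicated tensor names")  # e.g. {'blk.90.ffn_up_exps.weight': 2}

A name appearing twice usually means the split files come from mismatched uploads, which redownloading fixes.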
Unsloth AI org

Apologies - we uploaded a new version and overrode the old files, which might be why you got the error. Could you redownload and try again? Thanks!

No worries :) Do I need to download all 5 files, or just the first one?

Unsloth AI org

If you are using snapshot_download like below:

# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0" # hf_transfer can sometimes hit rate limits, so set to 0 to disable
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/GLM-4.7-GGUF",
    local_dir = "unsloth/GLM-4.7-GGUF",
    allow_patterns = ["*UD-Q2_K_XL*"], # Dynamic 2-bit; use "*UD-TQ1_0*" for Dynamic 1-bit
)

it'll re-download the changed files - sorry again! The new ones are imatrix-calibrated, so you will definitely get better results.
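For the quant in this thread, that would be the call below (the pattern is assumed from the shard names in the log above). Recent huggingface_hub versions skip files in local_dir that already match the Hub, so only changed shards are re-fetched:

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id = "unsloth/GLM-4.7-GGUF",
    local_dir = "unsloth/GLM-4.7-GGUF",
    allow_patterns = ["*UD-Q4_K_XL*"], # matches all 5 split files of this quant
)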

This comment has been hidden (marked as Abuse)
Unsloth AI org

@qpqpqpqpqpqp which issue are you referring to sorry?

I redownloaded the files and it works now. For some reason, when I downloaded them the first time, I got an error while loading. I deleted all 5 files and downloaded them again. All good now. You can close this one.

Unsloth AI org

@mtcl ok nice! It's ok we can leave the issue open

I went through your issues from the last 10 weeks and you never opened any issue on any Unsloth repo. Which issue are you referring to - maybe you meant to post on another repo or to another user? @qpqpqpqpqpqp

Unsloth AI org

Looks like @qpqpqpqpqpqp is being dishonest, as they still haven't provided any evidence or response. That's not okay, and it comes across as bad faith.
