GLM-4.5-Air-Derestricted AWQ - INT4

Model Details

Quantization Details

Memory Usage

Type                 | GLM-4.5-Air-Derestricted | GLM-4.5-Air-Derestricted-AWQ-4bit
Memory Size          | 205.8 GB                 | 59.0 GB
KV Cache per Token   | 61.3 kB                  | 15.3 kB
KV Cache per Context | 7.7 GB                   | 1.9 GB
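As a rough sanity check, the per-context figures follow from the per-token figures, assuming binary units (KiB/GiB) and GLM-4.5-Air's 128K (131,072-token) context window; both are assumptions here, not stated in the table:

```python
# KV cache per context = KV cache per token * context length.
# Assumes binary units (KiB/GiB) and a 128K (131,072-token) context window.
CONTEXT_LEN = 131_072

def kv_cache_per_context_gib(kib_per_token: float) -> float:
    """Convert a per-token KV-cache size in KiB to a full-context size in GiB."""
    return kib_per_token * CONTEXT_LEN / 2**20  # KiB -> GiB

print(kv_cache_per_context_gib(61.3))  # original:  ~7.66 GiB (table rounds to 7.7)
print(kv_cache_per_context_gib(15.3))  # AWQ-4bit:  ~1.91 GiB (table rounds to 1.9)
```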

Inference

Prerequisite

pip install -U vllm

Basic Usage

vllm serve cyankiwi/GLM-4.5-Air-Derestricted-AWQ-4bit
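Once the server is running, vLLM exposes an OpenAI-compatible API, by default at `http://localhost:8000/v1`. A minimal request sketch, assuming that default host and port:

```python
import json
import urllib.request

# Chat-completions payload for the locally served model.
payload = {
    "model": "cyankiwi/GLM-4.5-Air-Derestricted-AWQ-4bit",
    "messages": [
        {"role": "user", "content": "Explain AWQ quantization in one sentence."}
    ],
    "max_tokens": 256,
}

def chat(url: str = "http://localhost:8000/v1/chat/completions") -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# response = chat()  # requires the `vllm serve` process above to be running
```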

Additional Information

Known Issues

  • --tensor-parallel-size > 2 requires --enable-expert-parallel
  • No MTP (Multi-Token Prediction) implementation

Changelog

  • v0.9.0 - Initial quantized release without MTP implementation

Authors

Arli AI

GLM-4.5-Air-Derestricted

GLM-4.5-Air-Derestricted is a Derestricted version of GLM-4.5-Air, created by Arli AI.

Our goal with this release is to provide a version of the model that removes refusal behaviors while maintaining the high-performance reasoning of the original GLM-4.5-Air. This is unlike regular abliteration, which often inadvertently "lobotomizes" the model.

Methodology: Norm-Preserving Biprojected Abliteration

To achieve this, Arli AI utilized Norm-Preserving Biprojected Abliteration, a refined technique pioneered by Jim Lai (grimjim). You can read the full technical breakdown in this article.

Why this matters:

Standard abliteration works by simply subtracting a "refusal vector" from the model's weights. While this works to uncensor a model, it is mathematically unprincipled. It alters the magnitude (or "loudness") of the neurons, destroying the delicate feature norms the model learned during training. This damage is why many uncensored models suffer from degraded logic or hallucinations.
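To make the norm distortion concrete, here is a small illustrative sketch in plain NumPy (a toy, not the actual pipeline): naive abliteration subtracts each row's projection onto a refusal direction, which necessarily shrinks the row norm wherever the row has any component along that direction.

```python
import numpy as np

def naive_abliterate(W: np.ndarray, refusal: np.ndarray) -> np.ndarray:
    """Subtract each row's component along the (unit-normalized) refusal direction."""
    r = refusal / np.linalg.norm(refusal)
    return W - np.outer(W @ r, r)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))           # toy weight matrix
r = rng.normal(size=8)                # toy "refusal vector"

W_abl = naive_abliterate(W, r)
print(np.linalg.norm(W, axis=1))      # original row magnitudes
print(np.linalg.norm(W_abl, axis=1))  # strictly smaller: feature norms distorted
```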

How Norm-Preserving Biprojected Abliteration fixes it:

This model was modified using a three-step approach that removes refusals without breaking the model's brain:

  1. Biprojection (Targeting): We refined the refusal direction to ensure it is mathematically orthogonal to "harmless" directions. This ensures that when we cut out the refusal behavior, we do not accidentally cut out healthy, harmless concepts.
  2. Decomposition: Instead of a raw subtraction, we decomposed the model weights into Magnitude and Direction.
  3. Norm-Preservation: We removed the refusal component solely from the directional aspect of the weights, then recombined them with their original magnitudes.
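The three steps above can be sketched in NumPy. This is an illustrative toy, not Arli AI's actual code; `refusal` and `harmless` are stand-ins for the estimated refusal and harmless directions:

```python
import numpy as np

def norm_preserving_abliterate(W, refusal, harmless):
    # 1. Biprojection: make the refusal direction orthogonal to the harmless one,
    #    so removing it cannot also remove healthy, harmless concepts.
    h = harmless / np.linalg.norm(harmless)
    r = refusal - (refusal @ h) * h
    r /= np.linalg.norm(r)

    # 2. Decomposition: split each weight row into magnitude and direction.
    mags = np.linalg.norm(W, axis=1, keepdims=True)
    dirs = W / mags

    # 3. Norm-preservation: remove the refusal component from the directions
    #    only, re-normalize them, and recombine with the original magnitudes.
    dirs = dirs - np.outer(dirs @ r, r)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return mags * dirs

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))
W_new = norm_preserving_abliterate(W, rng.normal(size=8), rng.normal(size=8))
print(np.allclose(np.linalg.norm(W_new, axis=1),
                  np.linalg.norm(W, axis=1)))  # True: row norms preserved
```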

The Result:

By preserving the weight norms, we maintain the "importance" structure of the neural network. Benchmarks suggest that this method avoids the "Safety Tax": not only does it effectively remove refusals, it may even improve reasoning capabilities over the baseline, as the model is no longer wasting compute on suppressing its own outputs.

In fact, you may find knowledge and capabilities that the original model does not initially expose.

Original model card:

👋 Join our Discord community.
📖 Check out the GLM-4.5 technical blog, technical report, and Zhipu AI technical documentation.
📍 Use GLM-4.5 API services on Z.ai API Platform (Global) or Zhipu AI Open Platform (Mainland China).
👉 One click to GLM-4.5.

Model Introduction

The GLM-4.5 series models are foundation models designed for intelligent agents. GLM-4.5 has 355 billion total parameters with 32 billion active parameters, while GLM-4.5-Air adopts a more compact design with 106 billion total parameters and 12 billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.

Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.

We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.

As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of 63.2, ranking 3rd among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at 59.8 while maintaining superior efficiency.

(Figure: benchmark comparison of GLM-4.5 and GLM-4.5-Air against other models across the 12 benchmarks)

For more evaluation results, showcases, and technical details, please visit our technical blog or technical report.

The model code, tool parser and reasoning parser can be found in the implementation of transformers, vLLM and SGLang.

Quick Start

Please refer to our GitHub page for more details.
