---
license: mit
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/iyzgR89q50pp1T8HeeP15.png
base_model:
- zai-org/GLM-4.5-Air
pipeline_tag: text-generation
tags:
- abliterated
- derestricted
- glm-4.5-air
- unlimited
- uncensored
library_name: transformers
---
# Arli AI

# GLM-4.5-Air-Derestricted
GLM-4.5-Air-Derestricted is a **Derestricted** version of [GLM-4.5-Air](https://huggingface.co/zai-org/GLM-4.5-Air), created by **[Arli AI](https://www.arliai.com)**. Our goal with this release is to provide a version of the model with refusal behaviors removed while preserving the high-performance reasoning of the original GLM-4.5-Air, unlike regular abliteration, which often inadvertently "lobotomizes" the model.

### Methodology: Norm-Preserving Biprojected Abliteration

To achieve this, **[Arli AI](https://www.arliai.com)** used **Norm-Preserving Biprojected Abliteration**, a refined technique pioneered by Jim Lai (grimjim). You can read the full technical breakdown [in this article](https://huggingface.co/blog/grimjim/norm-preserving-biprojected-abliteration).

**Why this matters:** Standard abliteration works by simply subtracting a "refusal vector" from the model's weights. While this does uncensor a model, it is mathematically unprincipled: it alters the **magnitude** (or "loudness") of the neurons, destroying the delicate feature norms the model learned during training. This damage is why many uncensored models suffer from degraded logic or hallucinations.

**How Norm-Preserving Biprojected Abliteration fixes it:** This model was modified using a three-step approach that removes refusals without breaking the model's brain (see the sketch after this list):

1. **Biprojection (Targeting):** We refined the refusal direction to ensure it is mathematically orthogonal to "harmless" directions. This ensures that when we cut out the refusal behavior, we do not accidentally cut out healthy, harmless concepts.
2. **Decomposition:** Instead of a raw subtraction, we decomposed the model weights into **Magnitude** and **Direction**.
3. **Norm-Preservation:** We removed the refusal component solely from the *directional* aspect of the weights, then recombined them with their **original magnitudes**.

**The Result:** By preserving the weight norms, we maintain the "importance" structure of the neural network. Benchmarks suggest that this method avoids the "Safety Tax": it not only removes refusals effectively but may even **improve reasoning capabilities** over the baseline, since the model no longer wastes compute suppressing its own outputs. In fact, you may find knowledge and capabilities that the original model does not readily expose.
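For intuition, here is a minimal PyTorch sketch of the steps described above. It assumes a refusal direction has already been extracted from model activations; function names and the orthonormal `harmless_dirs` input are illustrative assumptions, not Arli AI's or grimjim's actual implementation (see the linked article for the full method):

```python
import torch

def biproject(refusal_dir: torch.Tensor, harmless_dirs: torch.Tensor) -> torch.Tensor:
    """Step 1 (illustrative): make the refusal direction orthogonal to each
    (assumed orthonormal) harmless direction, so ablating it cannot also
    remove harmless features."""
    for h in harmless_dirs:
        refusal_dir = refusal_dir - (refusal_dir @ h) * h
    return refusal_dir / refusal_dir.norm()

def norm_preserving_ablate(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Steps 2-3 (illustrative): remove the refusal component from the
    directional part of each row of W, then restore each row's original norm."""
    norms = W.norm(dim=1, keepdim=True)                          # original magnitudes
    dirs = W / norms                                             # unit-norm directions
    dirs = dirs - torch.outer(dirs @ refusal_dir, refusal_dir)   # ablate direction only
    dirs = dirs / dirs.norm(dim=1, keepdim=True)                 # re-normalize directions
    return dirs * norms                                          # recombine: norms preserved
```

Because each row's norm is restored at the end, the network's learned "importance" structure is untouched; only the orientation of the weights changes.

**Quantization:**
- Original: https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted
- FP8: https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted-FP8
- INT8: https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted-W8A8-INT8
- W4A16: https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted-GPTQ-W4A16

---

## Original model card: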

👋 Join our Discord community.
📖 Check out the GLM-4.5 technical blog, technical report, and Zhipu AI technical documentation.
📍 Use GLM-4.5 API services on Z.ai API Platform (Global) or Zhipu AI Open Platform (Mainland China).
👉 One click to GLM-4.5.

## Model Introduction

The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.

Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: a thinking mode for complex reasoning and tool usage, and a non-thinking mode for immediate responses.

We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.

As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, placing **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency.

![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png)

For more evaluation results, showcases, and technical details, please visit our [technical blog](https://z.ai/blog/glm-4.5) or [technical report](https://huggingface.co/papers/2508.06471).

The model code, tool parser, and reasoning parser can be found in the implementations in [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py), and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).

## Quick Start

Please refer to our [GitHub page](https://github.com/zai-org/GLM-4.5) for more details.
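As a minimal sketch of loading this checkpoint with standard Hugging Face `transformers` (assuming a recent `transformers` release with GLM-4.5 support, `accelerate` installed for `device_map="auto"`, and sufficient GPU memory; the GitHub page above remains the authoritative guide):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ArliAI/GLM-4.5-Air-Derestricted"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard the MoE layers across available GPUs
)

messages = [{"role": "user", "content": "Explain norm-preserving abliteration in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```

For production serving, the vLLM and SGLang implementations linked above are likely the better fit for a model of this size.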