Dataset Viewer
Auto-converted to Parquet
| Column | Type |
|---|---|
| _id | string |
| id | string |
| author | string |
| baseModels | dict |
| downloads | int64 |
| downloads_all_time | int64 |
| gated | string |
| created_at | timestamp[us, tz=UTC] |
| last_modified | timestamp[us, tz=UTC] |
| library_name | string |
| likes | int64 |
| trending_score | float64 |
| model_index | string |
| pipeline_tag | string |
| safetensors | string |
| siblings | list |
| sizes | list |
| total_size | int64 |
| sha | string |
| tags | list |
| gguf | string |
| card | string |
| spaces | list |
| licenses | list |
| datasets | list |
| languages | list |
| safetensors_params | float64 |
| gguf_params | float64 |
| tasks | list |
| metrics | list |
| architectures | list |
| modalities | list |
| input_modalities | list |
| output_modalities | list |
| org_model | string |
| org_type | string |
| org_country | list |
| a_gated | string |
| a_baseModels | string |
| a_input_modalities | list |
| a_output_modalities | list |
| a_architectures | list |
| a_languages | list |
| a_training_methods | list |
| a_ddpa | string |
| annotator | int64 |
68ac69484a1f0871ddf555e4
microsoft/VibeVoice-1.5B
microsoft
null
87,188
87,188
False
2025-08-25T13:46:48
2025-08-28T04:57:59
null
1,117
1,117
null
text-to-speech
{"parameters": {"BF16": 2704021985}, "total": 2704021985}
[ ".gitattributes", "README.md", "config.json", "figures/Fig1.png", "model-00001-of-00003.safetensors", "model-00002-of-00003.safetensors", "model-00003-of-00003.safetensors", "model.safetensors.index.json", "preprocessor_config.json" ]
[ 1603, 7273, 2762, 153971, 1975317828, 1983051688, 1449832938, 122616, 351 ]
5,408,491,030
cf42b8ff262f8a286bcbe580835cfaad62d277ca
[ "safetensors", "vibevoice", "Podcast", "text-to-speech", "en", "zh", "arxiv:2508.19205", "arxiv:2412.08635", "license:mit", "region:us" ]
null
## VibeVoice: A Frontier Open-Source Text-to-Speech Model

VibeVoice is a novel framework designed for generating expressive, long-form, multi-speaker conversational audio, such as podcasts, from text. It addresses significant challenges in traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking.

A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.

The model can synthesize speech up to **90 minutes** long with up to **4 distinct speakers**, surpassing the typical 1-2 speaker limits of many prior models.

➡️ **Technical Report:** [VibeVoice Technical Report](https://arxiv.org/abs/2508.19205)
➡️ **Project Page:** [microsoft/VibeVoice](https://microsoft.github.io/VibeVoice)
➡️ **Code:** [microsoft/VibeVoice-Code](https://github.com/microsoft/VibeVoice)

<p align="left">
  <img src="figures/Fig1.png" alt="VibeVoice Overview" height="250px">
</p>

## Training Details

The architecture integrates a Transformer-based Large Language Model (LLM) with specialized acoustic and semantic tokenizers and a diffusion-based decoding head.

- LLM: [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) for this release.
- Tokenizers:
  - Acoustic Tokenizer: Based on a σ-VAE variant (proposed in [LatentLM](https://arxiv.org/pdf/2412.08635)), with a mirror-symmetric encoder-decoder structure featuring 7 stages of modified Transformer blocks. Achieves 3200x downsampling from 24kHz input. Encoder/decoder components are ~340M parameters each.
  - Semantic Tokenizer: Encoder mirrors the Acoustic Tokenizer's architecture (without VAE components). Trained with an ASR proxy task.
- Diffusion Head: Lightweight module (4 layers, ~123M parameters) conditioned on LLM hidden states. Predicts acoustic VAE features using a Denoising Diffusion Probabilistic Models (DDPM) process. Uses Classifier-Free Guidance (CFG) and DPM-Solver (and variants) during inference.
- Context Length: Trained with a curriculum increasing up to 65,536 tokens.
- Training Stages:
  - Tokenizer Pre-training: Acoustic and Semantic tokenizers are pre-trained separately.
  - VibeVoice Training: Pre-trained tokenizers are frozen; only the LLM and diffusion head parameters are trained. A curriculum learning strategy is used for input sequence length (4K -> 16K -> 32K -> 64K).

The text tokenizer is not explicitly specified, but the LLM (Qwen2.5) typically uses its own. Audio is "tokenized" via the acoustic and semantic tokenizers.

## Models

| Model | Context Length | Generation Length | Weight |
|-------|----------------|-------------------|--------|
| VibeVoice-0.5B-Streaming | - | - | On the way |
| VibeVoice-1.5B | 64K | ~90 min | You are here. |
| VibeVoice-7B-Preview | 32K | ~45 min | [HF link](https://huggingface.co/WestZhang/VibeVoice-Large-pt) |

## Installation and Usage

Please refer to the [GitHub README](https://github.com/microsoft/VibeVoice?tab=readme-ov-file#installation).

## Responsible Usage

### Direct intended uses

The VibeVoice model is limited to research use exploring highly realistic audio dialogue generation, as detailed in the [tech report](https://github.com/microsoft/VibeVoice/blob/main/report/TechnicalReport.pdf).

### Out-of-scope uses

Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the MIT License. Use to generate any text transcript. Furthermore, this release is not intended or licensed for any of the following scenarios:

- Voice impersonation without explicit, recorded consent – cloning a real individual's voice for satire, advertising, ransom, social-engineering, or authentication bypass.
- Disinformation or impersonation – creating audio presented as genuine recordings of real people or events.
- Real-time or low-latency voice conversion – telephone or video-conference "live deep-fake" applications.
- Unsupported language – the model is trained only on English and Chinese data; outputs in other languages are unsupported and may be unintelligible or offensive.
- Generation of background ambience, Foley, or music – VibeVoice is speech-only and will not produce coherent non-speech audio.

## Risks and limitations

While efforts have been made to optimize it through various techniques, the model may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model (specifically, Qwen2.5-1.5B in this release).

Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.

English and Chinese only: Transcripts in languages other than English or Chinese may result in unexpected audio outputs.

Non-Speech Audio: The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects.

Overlapping Speech: The current model does not explicitly model or generate overlapping speech segments in conversations.

## Recommendations

We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.

To mitigate the risks of misuse, we have:

- Embedded an audible disclaimer (e.g. "This segment was generated by AI") automatically into every synthesized audio file.
- Added an imperceptible watermark to generated audio so third parties can verify VibeVoice provenance. Please see contact information at the end of this model card.
- Logged inference requests (hashed) for abuse pattern detection; aggregated statistics are published quarterly.

Users are responsible for sourcing their datasets legally and ethically.
This may include securing appropriate rights and/or anonymizing data prior to use with VibeVoice. Users are reminded to be mindful of data privacy concerns. ## Contact This project was conducted by members of Microsoft Research. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at VibeVoice@microsoft.com. If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.
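The Installation and Usage section above defers to the GitHub README; for reference, here is a minimal sketch (not from the model card) of fetching this checkpoint locally with `huggingface_hub` before following those instructions. The destination directory is a hypothetical example.

```python
# Minimal sketch (assumption: huggingface_hub is installed); downloads the
# files listed in the siblings column above to a local folder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="microsoft/VibeVoice-1.5B",  # repo id from this dataset row
    local_dir="./VibeVoice-1.5B",        # hypothetical destination path
)
print("checkpoint downloaded to", local_dir)
```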
[ "broadfield-dev/VibeVoice-demo", "yasserrmd/VibeVoice", "broadfield-dev/VibeVoice-demo-dev", "akhaliq/VibeVoice-1.5B", "mrfakename/VibeVoice-1.5B", "NeuralFalcon/VibeVoice-Colab", "thelip/VibeVoice", "ReallyFloppyPenguin/VibeVoice-demo", "Xenobd/VibeVoice-demo", "Dorjzodovsuren/VibeVoice", "umint/o4-mini", "krishna-ag/ms-vibe-voice", "Shubhvedi/Vibe-Voice-TTS", "danhtran2mind/VibeVoice", "SiddhJagani/Voice", "pierreguillou/VibeVoice-demo", "PunkTink/VibeVoice-mess", "ginipick/VibeVoice-demo", "umint/gpt-4.1-nano", "umint/o3", "jonathanagustin/vibevoice" ]
[ "mit" ]
null
[ "en", "zh" ]
2,704,021,985
null
[ "text-to-speech" ]
null
[ "VibeVoiceForConditionalGeneration", "vibevoice" ]
[ "audio" ]
[ "text" ]
[ "audio" ]
free
company
[ "United States of America", "International", "India", "Belgium" ]
null
null
null
null
null
null
null
null
null
68aaebfbfe684542cfc51e66
openbmb/MiniCPM-V-4_5
openbmb
null
9,706
9,706
False
2025-08-24T10:39:55
2025-08-31T14:57:14
transformers
747
747
null
image-text-to-text
{"parameters": {"BF16": 8695895280}, "total": 8695895280}
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "configuration_minicpm.py", "generation_config.json", "image_processing_minicpmv.py", "merges.txt", "model-00001-of-00004.safetensors", "model-00002-of-00004.safetensors", "model-00003-of-00004.safetensors", "model-00004-of-00004.safetensors", "model.safetensors.index.json", "modeling_minicpmv.py", "modeling_navit_siglip.py", "preprocessor_config.json", "processing_minicpmv.py", "resampler.py", "special_tokens_map.json", "tokenization_minicpmv_fast.py", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
[ 1570, 24775, 2862, 1461, 3288, 268, 20757, 1671853, 5286612176, 5301855088, 4546851120, 2256571800, 72172, 17754, 41835, 714, 11026, 11732, 12103, 1647, 11437868, 25786, 2776833 ]
17,408,026,488
17353d11601386fac6cca5a541e84b85928bd4ae
[ "transformers", "safetensors", "minicpmv", "feature-extraction", "minicpm-v", "vision", "ocr", "multi-image", "video", "custom_code", "image-text-to-text", "conversational", "multilingual", "dataset:openbmb/RLAIF-V-Dataset", "arxiv:2403.11703", "region:us" ]
null
<h1>A GPT-4o Level MLLM for Single Image, Multi Image and High-FPS Video Understanding on Your Phone</h1>

[GitHub](https://github.com/OpenBMB/MiniCPM-o) | [Demo](http://101.126.42.235:30910/)

## MiniCPM-V 4.5

**MiniCPM-V 4.5** is the latest and most capable model in the MiniCPM-V series. The model is built on Qwen3-8B and SigLIP2-400M with a total of 8B parameters. It exhibits a significant performance improvement over previous MiniCPM-V and MiniCPM-o models, and introduces new useful features. Notable features of MiniCPM-V 4.5 include:

- 🔥 **State-of-the-art Vision-Language Capability.** MiniCPM-V 4.5 achieves an average score of 77.0 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-latest, Gemini-2.0 Pro, and strong open-source models like Qwen2.5-VL 72B** for vision-language capabilities, making it the most performant MLLM under 30B parameters.
- 🎬 **Efficient High-FPS and Long Video Understanding.** Powered by a new unified 3D-Resampler over images and videos, MiniCPM-V 4.5 can now achieve a 96x compression rate for video tokens, where 6 448x448 video frames can be jointly compressed into 64 video tokens (normally 1,536 tokens for most MLLMs). This means the model can perceive significantly more video frames without increasing the LLM inference cost. This brings state-of-the-art high-FPS (up to 10FPS) video understanding and long video understanding capabilities on Video-MME, LVBench, MLVU, MotionBench, FavorBench, etc., efficiently.
- ⚙️ **Controllable Hybrid Fast/Deep Thinking.** MiniCPM-V 4.5 supports both fast thinking for efficient frequent usage with competitive performance, and deep thinking for more complex problem solving. To cover efficiency and performance trade-offs in different user scenarios, this fast/deep thinking mode can be switched in a highly controlled fashion.
- 💪 **Strong OCR, Document Parsing and Others.** Based on the [LLaVA-UHD](https://arxiv.org/pdf/2403.11703) architecture, MiniCPM-V 4.5 can process high-resolution images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), using 4x fewer visual tokens than most MLLMs. The model achieves **leading performance on OCRBench, surpassing proprietary models such as GPT-4o-latest and Gemini 2.5**. It also achieves state-of-the-art performance for PDF document parsing on OmniDocBench among general MLLMs. Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o-latest on MMHal-Bench, and supports **multilingual capabilities** in more than 30 languages.
- 💫 **Easy Usage.** MiniCPM-V 4.5 can be easily used in various ways: (1) [llama.cpp](https://github.com/tc-mb/llama.cpp/blob/Support-MiniCPM-V-4.5/docs/multimodal/minicpmv4.5.md) and [ollama](https://github.com/tc-mb/ollama/tree/MIniCPM-V) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4), [GGUF](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf) and [AWQ](https://github.com/tc-mb/AutoAWQ) format quantized models in 16 sizes, (3) [SGLang](https://github.com/tc-mb/sglang/tree/main) and [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [Transformers](https://github.com/tc-mb/transformers/tree/main) and [LLaMA-Factory](./docs/llamafactory_train_and_infer.md), (5) a quick [local WebUI demo](#chat-with-our-demo-on-gradio), (6) an optimized [local iOS app](https://github.com/tc-mb/MiniCPM-o-demo-iOS) on iPhone and iPad, and (7) an online web demo on [server](http://101.126.42.235:30910/). See our [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook) for full usages!

### Key Techniques

<div align="center">
  <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpm-v-4dot5-framework.png" width=100%>
</div>

- **Architecture: Unified 3D-Resampler for High-density Video Compression.** MiniCPM-V 4.5 introduces a 3D-Resampler that overcomes the performance-efficiency trade-off in video understanding. By grouping and jointly compressing up to 6 consecutive video frames into just 64 tokens (the same token count used for a single image in the MiniCPM-V series), MiniCPM-V 4.5 achieves a 96× compression rate for video tokens. This allows the model to process more video frames without additional LLM computational cost, enabling high-FPS video and long video understanding. The architecture supports unified encoding for images, multi-image inputs, and videos, ensuring seamless capability and knowledge transfer.
- **Pre-training: Unified Learning for OCR and Knowledge from Documents.** Existing MLLMs learn OCR capability and knowledge from documents in isolated training approaches. We observe that the essential difference between these two training approaches is the visibility of the text in images. By dynamically corrupting text regions in documents with varying noise levels and asking the model to reconstruct the text, the model learns to adaptively and properly switch between accurate text recognition (when text is visible) and multimodal context-based knowledge reasoning (when text is heavily obscured). This eliminates reliance on error-prone document parsers in knowledge learning from documents, and prevents hallucinations from over-augmented OCR data, resulting in top-tier OCR and multimodal knowledge performance with minimal engineering overhead.
- **Post-training: Hybrid Fast/Deep Thinking with Multimodal RL.** MiniCPM-V 4.5 offers a balanced reasoning experience through two switchable modes: fast thinking for efficient daily use and deep thinking for complex tasks. Using a new hybrid reinforcement learning method, the model jointly optimizes both modes, significantly enhancing fast-mode performance without compromising deep-mode capability. Incorporated with [RLPR](https://github.com/OpenBMB/RLPR) and [RLAIF-V](https://github.com/RLHF-V/RLAIF-V), it generalizes robust reasoning skills from broad multimodal data while effectively reducing hallucinations.
### Evaluation <div align="center"> <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/radar_minicpm_v45.png", width=60%> </div> <div align="center"> <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv_4_5_evaluation_result.png" , width=100%> </div> ### Inference Efficiency **OpenCompass** <div align="left"> <table style="margin: 0px auto;"> <thead> <tr> <th align="left">Model</th> <th>Size</th> <th>Avg Score ↑</th> <th>Total Inference Time ↓</th> </tr> </thead> <tbody align="center"> <tr> <td nowrap="nowrap" align="left">GLM-4.1V-9B-Thinking</td> <td>10.3B</td> <td>76.6</td> <td>17.5h</td> </tr> <tr> <td nowrap="nowrap" align="left">MiMo-VL-7B-RL</td> <td>8.3B</td> <td>76.4</td> <td>11h</td> </tr> <tr> <td nowrap="nowrap" align="left">MiniCPM-V 4.5</td> <td>8.7B</td> <td><b>77.0</td> <td><b>7.5h</td> </tr> </tbody> </table> </div> **Video-MME** <div align="left"> <table style="margin: 0px auto;"> <thead> <tr> <th align="left">Model</th> <th>Size</th> <th>Avg Score ↑</th> <th>Total Inference Time ↓</th> <th>GPU Mem ↓</th> </tr> </thead> <tbody align="center"> <tr> <td nowrap="nowrap" align="left">Qwen2.5-VL-7B-Instruct</td> <td>8.3B</td> <td>71.6</td> <td>3h</td> <td>60G</td> </tr> <tr> <td nowrap="nowrap" align="left">GLM-4.1V-9B-Thinking</td> <td>10.3B</td> <td><b>73.6</td> <td>2.63h</td> <td>32G</td> </tr> <tr> <td nowrap="nowrap" align="left">MiniCPM-V 4.5</td> <td>8.7B</td> <td>73.5</td> <td><b>0.26h</td> <td><b>28G</td> </tr> </tbody> </table> </div> Both Video-MME and OpenCompass were evaluated using 8×A100 GPUs for inference. The reported inference time of Video-MME includes full model-side computation, and excludes the external cost of video frame extraction (dependent on specific frame extraction tools) for fair comparison. ### Examples <div align="center"> <a href="https://www.youtube.com/watch?v=Cn23FujYMMU"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/MiniCPM-V%204.5-8.26_img.jpeg", width=70%></a> </div> <div style="display: flex; flex-direction: column; align-items: center;"> <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/en_case1.png" alt="en_case1" style="margin-bottom: 5px;"> <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/en_case2.png" alt="en_case2" style="margin-bottom: 5px;"> <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/en_case3.jpeg" alt="en_case3" style="margin-bottom: 5px;"> </div> We deploy MiniCPM-V 4.5 on iPad M4 with [iOS demo](https://github.com/tc-mb/MiniCPM-o-demo-iOS). The demo video is the raw screen recording without editing. 
<div align="center"> <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_en_handwriting.gif" width="45%" style="display: inline-block; margin: 0 10px;"/> <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_en_cot.gif" width="45%" style="display: inline-block; margin: 0 10px;"/> </div> <div align="center"> <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_cn_handwriting.gif" width="45%" style="display: inline-block; margin: 0 10px;"/> <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-o/main/assets/minicpmv4_5/v45_cn_travel.gif" width="45%" style="display: inline-block; margin: 0 10px;"/> </div> ## Usage If you wish to enable thinking mode, provide the argument `enable_thinking=True` to the chat function. #### Chat with Image ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer torch.manual_seed(100) model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6 image = Image.open('./assets/minicpmo2_6/show_demo.jpg').convert('RGB') enable_thinking=False # If `enable_thinking=True`, the thinking mode is enabled. stream=True # If `stream=True`, the answer is string # First round chat question = "What is the landform in the picture?" msgs = [{'role': 'user', 'content': [image, question]}] answer = model.chat( msgs=msgs, tokenizer=tokenizer, enable_thinking=enable_thinking, stream=True ) generated_text = "" for new_text in answer: generated_text += new_text print(new_text, flush=True, end='') # Second round chat, pass history context of multi-turn conversation msgs.append({"role": "assistant", "content": [answer]}) msgs.append({"role": "user", "content": ["What should I pay attention to when traveling here?"]}) answer = model.chat( msgs=msgs, tokenizer=tokenizer, stream=True ) generated_text = "" for new_text in answer: generated_text += new_text print(new_text, flush=True, end='') ``` You will get the following output: ```shell # round1 The landform in the picture is karst topography. Karst landscapes are characterized by distinctive, jagged limestone hills or mountains with steep, irregular peaks and deep valleys—exactly what you see here These unique formations result from the dissolution of soluble rocks like limestone over millions of years through water erosion. This scene closely resembles the famous karst landscape of Guilin and Yangshuo in China’s Guangxi Province. The area features dramatic, pointed limestone peaks rising dramatically above serene rivers and lush green forests, creating a breathtaking and iconic natural beauty that attracts millions of visitors each year for its picturesque views. # round2 When traveling to a karst landscape like this, here are some important tips: 1. Wear comfortable shoes: The terrain can be uneven and hilly. 2. Bring water and snacks for energy during hikes or boat rides. 3. Protect yourself from the sun with sunscreen, hats, and sunglasses—especially since you’ll likely spend time outdoors exploring scenic spots. 4. Respect local customs and nature regulations by not littering or disturbing wildlife. 
By following these guidelines, you'll have a safe and enjoyable trip while appreciating the stunning natural beauty of places such as Guilin’s karst mountains. ``` #### Chat with Video ```python ## The 3d-resampler compresses multiple frames into 64 tokens by introducing temporal_ids. # To achieve this, you need to organize your video data into two corresponding sequences: # frames: List[Image] # temporal_ids: List[List[Int]]. import torch from PIL import Image from transformers import AutoModel, AutoTokenizer from decord import VideoReader, cpu # pip install decord from scipy.spatial import cKDTree import numpy as np import math model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6 MAX_NUM_FRAMES=180 # Indicates the maximum number of frames received after the videos are packed. The actual maximum number of valid frames is MAX_NUM_FRAMES * MAX_NUM_PACKING. MAX_NUM_PACKING=3 # indicates the maximum packing number of video frames. valid range: 1-6 TIME_SCALE = 0.1 def map_to_nearest_scale(values, scale): tree = cKDTree(np.asarray(scale)[:, None]) _, indices = tree.query(np.asarray(values)[:, None]) return np.asarray(scale)[indices] def group_array(arr, size): return [arr[i:i+size] for i in range(0, len(arr), size)] def encode_video(video_path, choose_fps=3, force_packing=None): def uniform_sample(l, n): gap = len(l) / n idxs = [int(i * gap + gap / 2) for i in range(n)] return [l[i] for i in idxs] vr = VideoReader(video_path, ctx=cpu(0)) fps = vr.get_avg_fps() video_duration = len(vr) / fps if choose_fps * int(video_duration) <= MAX_NUM_FRAMES: packing_nums = 1 choose_frames = round(min(choose_fps, round(fps)) * min(MAX_NUM_FRAMES, video_duration)) else: packing_nums = math.ceil(video_duration * choose_fps / MAX_NUM_FRAMES) if packing_nums <= MAX_NUM_PACKING: choose_frames = round(video_duration * choose_fps) else: choose_frames = round(MAX_NUM_FRAMES * MAX_NUM_PACKING) packing_nums = MAX_NUM_PACKING frame_idx = [i for i in range(0, len(vr))] frame_idx = np.array(uniform_sample(frame_idx, choose_frames)) if force_packing: packing_nums = min(force_packing, MAX_NUM_PACKING) print(video_path, ' duration:', video_duration) print(f'get video frames={len(frame_idx)}, packing_nums={packing_nums}') frames = vr.get_batch(frame_idx).asnumpy() frame_idx_ts = frame_idx / fps scale = np.arange(0, video_duration, TIME_SCALE) frame_ts_id = map_to_nearest_scale(frame_idx_ts, scale) / TIME_SCALE frame_ts_id = frame_ts_id.astype(np.int32) assert len(frames) == len(frame_ts_id) frames = [Image.fromarray(v.astype('uint8')).convert('RGB') for v in frames] frame_ts_id_group = group_array(frame_ts_id, packing_nums) return frames, frame_ts_id_group video_path="video_test.mp4" fps = 5 # fps for video force_packing = None # You can set force_packing to ensure that 3D packing is forcibly enabled; otherwise, encode_video will dynamically set the packing quantity based on the duration. 
frames, frame_ts_id_group = encode_video(video_path, fps, force_packing=force_packing) question = "Describe the video" msgs = [ {'role': 'user', 'content': frames + [question]}, ] answer = model.chat( msgs=msgs, tokenizer=tokenizer, use_image_id=False, max_slice_nums=1, temporal_ids=frame_ts_id_group ) print(answer) ``` #### Chat with multiple images <details> <summary> Click to show Python code running MiniCPM-V 4.5 with multiple images input. </summary> ```python import torch from PIL import Image from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2 model = model.eval().cuda() tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) image1 = Image.open('image1.jpg').convert('RGB') image2 = Image.open('image2.jpg').convert('RGB') question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.' msgs = [{'role': 'user', 'content': [image1, image2, question]}] answer = model.chat( image=None, msgs=msgs, tokenizer=tokenizer ) print(answer) ``` </details> ## License #### Model License * The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License. * The usage of MiniCPM-V series model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM-o/blob/main/MiniCPM%20Model%20License.md). * The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM-V 4.5 weights are also available for free commercial use. #### Statement * As an LMM, MiniCPM-V 4.5 generates contents by learning a large amount of multimodal corpora, but it cannot comprehend, express personal opinions or make value judgement. Anything generated by MiniCPM-V 4.5 does not represent the views and positions of the model developers * We will not be liable for any problems arising from the use of the MinCPM-V models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model. ## Key Techniques and Other Multimodal Projects 👏 Welcome to explore key techniques of MiniCPM-V 4.5 and other multimodal projects of our team: [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLPR](https://github.com/OpenBMB/RLPR) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V) ## Citation If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️! ```bib @article{yao2024minicpm, title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone}, author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others}, journal={Nat Commun 16, 5509 (2025)}, year={2025} } ```
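The Usage examples above run with `enable_thinking=False`; as a complement, here is a minimal sketch of switching on the deep-thinking mode described in the card, reusing the same `model.chat` call as the Chat with Image example (the image path and question are hypothetical placeholders):

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True,
    attn_implementation='sdpa', torch_dtype=torch.bfloat16)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True)

image = Image.open('chart.png').convert('RGB')  # hypothetical input image
msgs = [{'role': 'user', 'content': [image, "Walk through the reasoning behind this chart."]}]

# enable_thinking=True switches from fast thinking to the deep-thinking mode.
answer = model.chat(msgs=msgs, tokenizer=tokenizer, enable_thinking=True)
print(answer)
```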
[ "akhaliq/MiniCPM-V-4_5", "orrzxz/MiniCPM-V-4_5", "WYC-2025/MiniCPM-V-4_5", "CGQN/MiniCPM-V-4_5", "CGQN/MiniCPM-V-4_5-from_gpt5", "CGQN/MiniCPM-V-4_5-CPU-0" ]
null
[ "openbmb/RLAIF-V-Dataset" ]
[ "multilingual" ]
8,695,895,280
null
[ "feature-extraction", "image-text-to-text" ]
null
[ "modeling_minicpmv.MiniCPMV", "MiniCPMV", "AutoModel", "minicpmv" ]
[ "multimodal" ]
[ "text", "image" ]
[ "embeddings", "text" ]
free
community
[ "China" ]
null
null
null
null
null
null
null
null
null
68a8de283195d5730fd2c5b8
xai-org/grok-2
xai-org
null
4,047
4,047
False
2025-08-22T21:16:24
2025-08-24T00:59:56
null
879
485
null
null
null
[ ".gitattributes", "LICENSE", "README.md", "config.json", "pytorch_model-00000-TP-common.safetensors", "pytorch_model-00001-TP-common.safetensors", "pytorch_model-00002-TP-common.safetensors", "pytorch_model-00003-TP-common.safetensors", "pytorch_model-00004-TP-common.safetensors", "pytorch_model-00005-TP-common.safetensors", "pytorch_model-00006-TP-000.safetensors", "pytorch_model-00006-TP-001.safetensors", "pytorch_model-00006-TP-002.safetensors", "pytorch_model-00006-TP-003.safetensors", "pytorch_model-00006-TP-004.safetensors", "pytorch_model-00006-TP-005.safetensors", "pytorch_model-00006-TP-006.safetensors", "pytorch_model-00006-TP-007.safetensors", "pytorch_model-00007-TP-000.safetensors", "pytorch_model-00007-TP-001.safetensors", "pytorch_model-00007-TP-002.safetensors", "pytorch_model-00007-TP-003.safetensors", "pytorch_model-00007-TP-004.safetensors", "pytorch_model-00007-TP-005.safetensors", "pytorch_model-00007-TP-006.safetensors", "pytorch_model-00007-TP-007.safetensors", "pytorch_model-00008-TP-000.safetensors", "pytorch_model-00008-TP-001.safetensors", "pytorch_model-00008-TP-002.safetensors", "pytorch_model-00008-TP-003.safetensors", "pytorch_model-00008-TP-004.safetensors", "pytorch_model-00008-TP-005.safetensors", "pytorch_model-00008-TP-006.safetensors", "pytorch_model-00008-TP-007.safetensors", "pytorch_model-00009-TP-common.safetensors", "pytorch_model-00010-TP-common.safetensors", "pytorch_model-00011-TP-common.safetensors", "pytorch_model-00012-TP-common.safetensors", "pytorch_model-00013-TP-common.safetensors", "pytorch_model-00014-TP-common.safetensors", "pytorch_model-00015-TP-common.safetensors", "pytorch_model-00016-TP-common.safetensors", "pytorch_model-00017-TP-common.safetensors", "tokenizer.tok.json" ]
[ 1519, 5362, 1583, 947, 2147483760, 2147483744, 16472, 34359745872, 34359745872, 34359745744, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 17179936544, 1073749240, 8589942120, 8589942120, 1073749240, 1055096, 1055160, 1055032, 1055096, 8395888, 7724637 ]
539,040,431,560
d60cbe267db8bb43be676bc80e200c64268ea8ec
[ "git", "region:us" ]
null
# Grok 2

This repository contains the weights of Grok 2, a model trained and used at xAI in 2024.

## Usage: Serving with SGLang

- Download the weights. You can replace `/local/grok-2` with any other folder name you prefer.

```
hf download xai-org/grok-2 --local-dir /local/grok-2
```

You might encounter some errors during the download. Please retry until the download is successful. If the download succeeds, the folder should contain **42 files** and be approximately 500 GB.

- Launch a server. Install the latest SGLang inference engine (>= v0.5.1) from https://github.com/sgl-project/sglang/. Use the command below to launch an inference server. This checkpoint is TP=8, so you will need 8 GPUs (each with > 40GB of memory).

```
python3 -m sglang.launch_server --model /local/grok-2 --tokenizer-path /local/grok-2/tokenizer.tok.json --tp 8 --quantization fp8 --attention-backend triton
```

- Send a request. This is a post-trained model, so please use the correct [chat template](https://github.com/sgl-project/sglang/blob/97a38ee85ba62e268bde6388f1bf8edfe2ca9d76/python/sglang/srt/tokenizer/tiktoken_tokenizer.py#L106).

```
python3 -m sglang.test.send_one --prompt "Human: What is your name?<|separator|>\n\nAssistant:"
```

You should be able to see the model output its name, Grok. Learn more about other ways to send requests [here](https://docs.sglang.ai/basic_usage/send_request.html).

## License

The weights are licensed under the [Grok 2 Community License Agreement](https://huggingface.co/xai-org/grok-2/blob/main/LICENSE).
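Beyond `sglang.test.send_one`, the launched server can also be queried over HTTP; a minimal sketch, assuming the server above is running locally on SGLang's default port 30000 and exposes its native `/generate` endpoint, with the same prompt format as the command above:

```python
# Minimal sketch (assumptions: local server on port 30000, native /generate endpoint).
import requests

resp = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Human: What is your name?<|separator|>\n\nAssistant:",
        "sampling_params": {"max_new_tokens": 64, "temperature": 0},
    },
)
print(resp.json()["text"])  # expected to mention the model's name, Grok
```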
[ "umint/o4-mini", "AnilNiraula/FinChat", "umint/gpt-4.1-nano", "umint/o3" ]
null
null
null
null
null
null
null
[ "Grok1ForCausalLM", "git" ]
null
null
null
team
company
[ "United States of America" ]
null
null
null
null
null
null
null
null
null
68a19381db43c983deb63fa5
Qwen/Qwen-Image-Edit
Qwen
null
75,516
75,516
False
2025-08-17T08:32:01
2025-08-25T04:41:11
diffusers
1,545
359
null
image-to-image
null
[ ".gitattributes", "README.md", "model_index.json", "processor/added_tokens.json", "processor/chat_template.jinja", "processor/merges.txt", "processor/preprocessor_config.json", "processor/special_tokens_map.json", "processor/tokenizer.json", "processor/tokenizer_config.json", "processor/video_preprocessor_config.json", "processor/vocab.json", "scheduler/scheduler_config.json", "text_encoder/config.json", "text_encoder/generation_config.json", "text_encoder/model-00001-of-00004.safetensors", "text_encoder/model-00002-of-00004.safetensors", "text_encoder/model-00003-of-00004.safetensors", "text_encoder/model-00004-of-00004.safetensors", "text_encoder/model.safetensors.index.json", "tokenizer/added_tokens.json", "tokenizer/chat_template.jinja", "tokenizer/merges.txt", "tokenizer/special_tokens_map.json", "tokenizer/tokenizer_config.json", "tokenizer/vocab.json", "transformer/config.json", "transformer/diffusion_pytorch_model-00001-of-00009.safetensors", "transformer/diffusion_pytorch_model-00002-of-00009.safetensors", "transformer/diffusion_pytorch_model-00003-of-00009.safetensors", "transformer/diffusion_pytorch_model-00004-of-00009.safetensors", "transformer/diffusion_pytorch_model-00005-of-00009.safetensors", "transformer/diffusion_pytorch_model-00006-of-00009.safetensors", "transformer/diffusion_pytorch_model-00007-of-00009.safetensors", "transformer/diffusion_pytorch_model-00008-of-00009.safetensors", "transformer/diffusion_pytorch_model-00009-of-00009.safetensors", "transformer/diffusion_pytorch_model.safetensors.index.json", "vae/config.json", "vae/diffusion_pytorch_model.safetensors" ]
[ 1580, 11747, 512, 605, 1017, 1671853, 788, 613, 11421896, 4727, 904, 2776833, 485, 3217, 244, 4968243304, 4991495816, 4932751040, 1691924384, 57655, 605, 2427, 1671853, 613, 4686, 3383407, 339, 4989364312, 4984214160, 4946470000, 4984213736, 4946471896, 4946451560, 4908690520, 4984232856, 1170918840, 198887, 730, 253806966 ]
57,720,467,613
ac7f9318f633fc4b5778c59367c8128225f1e3de
[ "diffusers", "safetensors", "image-to-image", "en", "zh", "arxiv:2508.02324", "license:apache-2.0", "diffusers:QwenImageEditPipeline", "region:us" ]
null
<p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/qwen_image_edit_logo.png" width="400"/> <p> <p align="center"> 💜 <a href="https://chat.qwen.ai/"><b>Qwen Chat</b></a>&nbsp&nbsp | &nbsp&nbsp🤗 <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/Qwen_Image.pdf">Tech Report</a> &nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://qwenlm.github.io/blog/qwen-image-edit/">Blog</a> &nbsp&nbsp <br> 🖥️ <a href="https://huggingface.co/spaces/Qwen/Qwen-Image-Edit">Demo</a>&nbsp&nbsp | &nbsp&nbsp💬 <a href="https://github.com/QwenLM/Qwen-Image/blob/main/assets/wechat.png">WeChat (微信)</a>&nbsp&nbsp | &nbsp&nbsp🫨 <a href="https://discord.gg/CV4E9rpNSD">Discord</a>&nbsp&nbsp| &nbsp&nbsp <a href="https://github.com/QwenLM/Qwen-Image">Github</a>&nbsp&nbsp </p> <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_homepage.jpg" width="1600"/> <p> # Introduction We are excited to introduce Qwen-Image-Edit, the image editing version of Qwen-Image. Built upon our 20B Qwen-Image model, Qwen-Image-Edit successfully extends Qwen-Image’s unique text rendering capabilities to image editing tasks, enabling precise text editing. Furthermore, Qwen-Image-Edit simultaneously feeds the input image into Qwen2.5-VL (for visual semantic control) and the VAE Encoder (for visual appearance control), achieving capabilities in both semantic and appearance editing. To experience the latest model, visit [Qwen Chat](https://qwen.ai) and select the "Image Editing" feature. Key Features: * **Semantic and Appearance Editing**: Qwen-Image-Edit supports both low-level visual appearance editing (such as adding, removing, or modifying elements, requiring all other regions of the image to remain completely unchanged) and high-level visual semantic editing (such as IP creation, object rotation, and style transfer, allowing overall pixel changes while maintaining semantic consistency). * **Precise Text Editing**: Qwen-Image-Edit supports bilingual (Chinese and English) text editing, allowing direct addition, deletion, and modification of text in images while preserving the original font, size, and style. * **Strong Benchmark Performance**: Evaluations on multiple public benchmarks demonstrate that Qwen-Image-Edit achieves state-of-the-art (SOTA) performance in image editing tasks, establishing it as a powerful foundation model for image editing. ## Quick Start Install the latest version of diffusers ``` pip install git+https://github.com/huggingface/diffusers ``` The following contains a code snippet illustrating how to use the model to generate images based on text prompts: ```python import os from PIL import Image import torch from diffusers import QwenImageEditPipeline pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit") print("pipeline loaded") pipeline.to(torch.bfloat16) pipeline.to("cuda") pipeline.set_progress_bar_config(disable=None) image = Image.open("./input.png").convert("RGB") prompt = "Change the rabbit's color to purple, with a flash light background." 
inputs = { "image": image, "prompt": prompt, "generator": torch.manual_seed(0), "true_cfg_scale": 4.0, "negative_prompt": " ", "num_inference_steps": 50, } with torch.inference_mode(): output = pipeline(**inputs) output_image = output.images[0] output_image.save("output_image_edit.png") print("image saved at", os.path.abspath("output_image_edit.png")) ``` ## Showcase One of the highlights of Qwen-Image-Edit lies in its powerful capabilities for semantic and appearance editing. Semantic editing refers to modifying image content while preserving the original visual semantics. To intuitively demonstrate this capability, let's take Qwen's mascot—Capybara—as an example: ![Capibara](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片3.JPG#center) As can be seen, although most pixels in the edited image differ from those in the input image (the leftmost image), the character consistency of Capybara is perfectly preserved. Qwen-Image-Edit's powerful semantic editing capability enables effortless and diverse creation of original IP content. Furthermore, on Qwen Chat, we designed a series of editing prompts centered around the 16 MBTI personality types. Leveraging these prompts, we successfully created a set of MBTI-themed emoji packs based on our mascot Capybara, effortlessly expanding the IP's reach and expression. ![MBTI meme series](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片4.JPG#center) Moreover, novel view synthesis is another key application scenario in semantic editing. As shown in the two example images below, Qwen-Image-Edit can not only rotate objects by 90 degrees, but also perform a full 180-degree rotation, allowing us to directly see the back side of the object: ![Viewpoint transformation 90 degrees](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片12.JPG#center) ![Viewpoint transformation 180 degrees](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片13.JPG#center) Another typical application of semantic editing is style transfer. For instance, given an input portrait, Qwen-Image-Edit can easily transform it into various artistic styles such as Studio Ghibli. This capability holds significant value in applications like virtual avatar creation: ![Style transfer](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片1.JPG#center) In addition to semantic editing, appearance editing is another common image editing requirement. Appearance editing emphasizes keeping certain regions of the image completely unchanged while adding, removing, or modifying specific elements. The image below illustrates a case where a signboard is added to the scene. As shown, Qwen-Image-Edit not only successfully inserts the signboard but also generates a corresponding reflection, demonstrating exceptional attention to detail. ![Adding a signboard](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片6.JPG#center) Below is another interesting example, demonstrating how to remove fine hair strands and other small objects from an image. ![Removing fine strands of hair](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片7.JPG#center) Additionally, the color of a specific letter "n" in the image can be modified to blue, enabling precise editing of particular elements. 
![Modifying text color](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片8.JPG#center) Appearance editing also has wide-ranging applications in scenarios such as adjusting a person's background or changing clothing. The three images below demonstrate these practical use cases respectively. ![Modifying backgrounds](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片11.JPG#center) ![Modifying clothing](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片5.JPG#center) Another standout feature of Qwen-Image-Edit is its accurate text editing capability, which stems from Qwen-Image's deep expertise in text rendering. As shown below, the following two cases vividly demonstrate Qwen-Image-Edit's powerful performance in editing English text: ![Editing English text 1](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片15.JPG#center) ![Editing English text 2](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片16.JPG#center) Qwen-Image-Edit can also directly edit Chinese posters, enabling not only modifications to large headline text but also precise adjustments to even small and intricate text elements. ![Editing Chinese posters](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片17.JPG#center) Finally, let's walk through a concrete image editing example to demonstrate how to use a chained editing approach to progressively correct errors in a calligraphy artwork generated by Qwen-Image: ![Calligraphy artwork](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片18.JPG#center) In this artwork, several Chinese characters contain generation errors. We can leverage Qwen-Image-Edit to correct them step by step. For instance, we can draw bounding boxes on the original image to mark the regions that need correction, instructing Qwen-Image-Edit to fix these specific areas. Here, we want the character "稽" to be correctly written within the red box, and the character "亭" to be accurately rendered in the blue region. ![Correcting characters](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片19.JPG#center) However, in practice, the character "稽" is relatively obscure, and the model fails to correct it correctly in one step. The lower-right component of "稽" should be "旨" rather than "日". At this point, we can further highlight the "日" portion with a red box, instructing Qwen-Image-Edit to fine-tune this detail and replace it with "旨". ![Fine-tuning character](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片20.JPG#center) Isn't it amazing? With this chained, step-by-step editing approach, we can continuously correct character errors until the desired final result is achieved. ![Final version 1](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片21.JPG#center) ![Final version 2](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片22.JPG#center) ![Final version 3](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片23.JPG#center) ![Final version 4](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片24.JPG#center) ![Final version 5](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/edit_en/幻灯片25.JPG#center) Finally, we have successfully obtained a completely correct calligraphy version of *Lantingji Xu (Orchid Pavilion Preface)*! 
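The chained, step-by-step workflow above can be scripted by feeding each output image back in as the next input; here is a minimal sketch reusing the Quick Start pipeline (the input path and prompts below are hypothetical placeholders):

```python
# Minimal sketch of chained editing (assumptions: hypothetical input file and prompts);
# each pass reuses the previous output as the new input image.
import torch
from PIL import Image
from diffusers import QwenImageEditPipeline

pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit")
pipeline.to(torch.bfloat16)
pipeline.to("cuda")

image = Image.open("./calligraphy.png").convert("RGB")  # hypothetical input
prompts = [
    "Rewrite the character in the red box correctly.",          # hypothetical prompt
    "Change the lower-right component in the red box to 旨.",    # hypothetical prompt
]
for step, prompt in enumerate(prompts):
    with torch.inference_mode():
        image = pipeline(
            image=image,
            prompt=prompt,
            generator=torch.manual_seed(0),
            true_cfg_scale=4.0,
            negative_prompt=" ",
            num_inference_steps=50,
        ).images[0]
    image.save(f"edit_step_{step}.png")
```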
In summary, we hope that Qwen-Image-Edit can further advance the field of image generation, truly lower the technical barriers to visual content creation, and inspire even more innovative applications. ## License Agreement Qwen-Image is licensed under Apache 2.0. ## Citation We kindly encourage citation of our work if you find it useful. ```bibtex @misc{wu2025qwenimagetechnicalreport, title={Qwen-Image Technical Report}, author={Chenfei Wu and Jiahao Li and Jingren Zhou and Junyang Lin and Kaiyuan Gao and Kun Yan and Sheng-ming Yin and Shuai Bai and Xiao Xu and Yilei Chen and Yuxiang Chen and Zecheng Tang and Zekai Zhang and Zhengyi Wang and An Yang and Bowen Yu and Chen Cheng and Dayiheng Liu and Deqing Li and Hang Zhang and Hao Meng and Hu Wei and Jingyuan Ni and Kai Chen and Kuan Cao and Liang Peng and Lin Qu and Minggang Wu and Peng Wang and Shuting Yu and Tingkun Wen and Wensen Feng and Xiaoxiao Xu and Yi Wang and Yichang Zhang and Yongqiang Zhu and Yujia Wu and Yuxuan Cai and Zenan Liu}, year={2025}, eprint={2508.02324}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2508.02324}, } ``` ## Join Us If you're passionate about fundamental research, we're hiring full-time employees (FTEs) and research interns. Don't wait — reach out to us at fulai.hr@alibaba-inc.com
[ "multimodalart/Qwen-Image-Edit-Fast", "Qwen/Qwen-Image-Edit", "zerogpu-aoti/Qwen-Image-Edit-Relight", "zerogpu-aoti/Qwen-Image-Edit-Outpaint", "llamameta/nano-banana-experimental", "zerogpu-aoti/Qwen-Image-Edit-Multi-Image", "bep40/Nano-Banana", "LPX55/Qwen-Image-Edit_Fast-Presets", "VirtualKimi/Nano-Banana", "ginigen/Nano-Banana-PRO", "reallifeadi/Qwen-Qwen-Image-Edit", "aiqtech/kofaceid", "wavespeed/qwen-edit-image", "zerogpu-aoti/Qwen-Image-Edit-aot-dynamic-fa3-fix-cfg", "nazdridoy/inferoxy-hub", "RAMASocute/Qwen-Qwen-Image-Edit", "umint/o4-mini", "xbai4680/sdsadsad", "wavespeed/Qwen-Image-Edit", "dangthr/Qwen-Image-Edit", "TopGeneralDeng/Qwen-Qwen-Image-Edit", "jacobcrowww/Qwen-Qwen-Image-Edit", "cku9790/Qwen-Qwen-Image-Edit", "TerrenceY/Qwen-Qwen-Image-Edit", "JonathanZouari/Qwen-Qwen-Image-Edit", "hassan1x/Qwen-Qwen-Image-Edit", "LLMhacker/Qwen-Image-Edit-Fast", "affgg/Qwen-Qwen-Image-Edit", "SinniDcat/Qwen-Qwen-Image-Edit", "jalhaq82/Qwen-Qwen-Image-Edit", "LLMhacker/Qwen-Image-Edit", "fengxingwei/Qwen-Qwen-Image-Edit", "rectmedia/Qwen-Qwen-Image-Edit", "ReallyFloppyPenguin/Qwen-Qwen-Image-Edit", "adrawn/Qwen-Qwen-Image-Edit", "VirtualKimi/Qwen-Image-Edit-Fast", "MindCraft24729/Qwen-Image-Edit", "jinwu76/Qwen-Qwen-Image-Edit", "Muyumba/Qwen-Qwen-Image-Edit", "FanArtFuseBeads/Qwen-Qwen-Image-Edit", "qwer555/Qwen-Qwen-Image-Edit", "DarwinPRR/Qwen-Qwen-Image-Edit", "baicy/Qwen-Qwen-Image-Edit", "sununy/ff", "mrbui1990/Qwen-Image-Edit-Fast", "AbdelhamedJr/Qwen-Qwen-Image-Edit", "t3llo/Qwen-Qwen-Image-Edit", "Vutony/Qwen-Qwen-Image-Edit", "Usbebdhndejkss/Qwen-Qwen-Image-Edit", "HumorBuddy/Qwen-Qwen-Image-Edit", "racerx916/Qwen-Qwen-Image-Edit", "WasabiPLP/Qwen-Qwen-Image-Edit", "rohanmiriyala/Qwen-Qwen-Image-Edit", "R127/Qwen-Qwen-Image-Edit", "xiaowuzi/Qwen-Qwen-Image-Edit", "ackpro789/Qwen-Qwen-Image-Edit", "Gvqlo10c/Qwen-Qwen-Image-Edit", "Mehdidib/Qwen-Qwen-Image-Edit", "felipk/Qwen-Qwen-Image-Edit", "fearslayer45/Qwen-Qwen-Image-Edit", "gptken/Qwen-Qwen-Image-Edit", "miangusapa/Qwen-Qwen-Image-Edit", "tchung1970/Qwen-Image-Edit", "alis9974/Qwen-Image-Edit2", "cssddnnc/Qwen-Qwen-Image-Edit", "aichimaodeyu/Qwen-Qwen-Image-Edit", "MohanaDeepan/Qwen-Qwen-Image-Edit", "Vigesvikes/Qwen-Qwen-Image-Edit", "cbensimon/Qwen-Image-Edit-aot-dynamic-fa3", "ASHWINI66929/Qwen-Qwen-Image-Edit", "burtenshaw/Qwen-Image-Edit-MCP", "itdog-max/Qwen-Qwen-Image-Edit", "wakozee/Qwen-Qwen-Image-Edit", "Sudharsannn/Qwen-Qwen-Image-Edit", "kkvipvip/Qwen-Qwen-Image-Edit", "stealthify/nano-banana-exp-image-edit", "silvanin/Qwen-Qwen-Image-Edit", "yuxingxing/Qwen-Qwen-Image-Edit", "mgbam/yeye", "Falln87/Qwen_Image_Suite", "Margh0330/Qwen-Qwen-Image-Edit", "einarhre/viswiz", "Idusha/Qwen-Qwen-Image-Edit", "rahulxcr/Qwen-Image-Edit", "sunny1997/Qwen-Image-Edit-Fast", "pmau45/Qwen-Qwen-Image-Edit", "datxy/Qwen-Image-Edit-Fast", "VegaLing/Vega-Qwen-Qwen-Image-Edit", "inggaro/Qwen-Qwen-Image-Edit", "dlschad/Qwen-Qwen-Image-Edit", "zzhc/Qwen-Qwen-Image-Edit", "Love680/Qwen-Qwen-Image-Edit", "arturono/Qwen-Qwen-Image-Edit", "umint/gpt-4.1-nano", "umint/o3", "Rahul-KJS/Qwen-Qwen-Image-Edit", "Framill/Qwen-Qwen-Image-Edit", "Nvra/Qwen-Qwen-Image-Edit", "Avinashthehulk/Qwen-Qwen-Image-Edit", "pistonX/Qwen-Qwen-Image-Edit", "sormunir/Qwen-Qwen-Image-Edit", "bep40/Qwen-Image-Edit-Multi-Image", "marie11110/Qwen-Qwen-Image-Edit", "chengzhigang/Qwen-Image-Edit_Fast-Presets01", "chengzhigang/Qwen-Image-Edit-Fast-02", "Rahul-KJS/cartoonize" ]
[ "apache-2.0" ]
null
[ "en", "zh" ]
null
null
[ "image-to-image" ]
null
null
[ "vision" ]
[ "image" ]
[ "image" ]
team
company
[ "China" ]
null
null
null
null
null
null
null
null
null
68abccbf1935e46075b39df2
Wan-AI/Wan2.2-S2V-14B
Wan-AI
null
9,959
9,959
False
2025-08-25T02:38:55
2025-08-28T02:36:24
diffusers
197
197
null
null
null
[ ".gitattributes", "README.md", "Wan2.1_VAE.pth", "assets/471504690-b63bfa58-d5d7-4de6-a1a2-98970b06d9a7.mp4", "assets/comp_effic.png", "assets/logo.png", "assets/moe_2.png", "assets/moe_arch.png", "assets/performance.png", "assets/vae.png", "config.json", "configuration.json", "diffusion_pytorch_model-00001-of-00004.safetensors", "diffusion_pytorch_model-00002-of-00004.safetensors", "diffusion_pytorch_model-00003-of-00004.safetensors", "diffusion_pytorch_model-00004-of-00004.safetensors", "diffusion_pytorch_model.safetensors.index.json", "google/umt5-xxl/special_tokens_map.json", "google/umt5-xxl/spiece.model", "google/umt5-xxl/tokenizer.json", "google/umt5-xxl/tokenizer_config.json", "models_t5_umt5-xxl-enc-bf16.pth", "wav2vec2-large-xlsr-53-english/.msc", "wav2vec2-large-xlsr-53-english/.mv", "wav2vec2-large-xlsr-53-english/README.md", "wav2vec2-large-xlsr-53-english/alphabet.json", "wav2vec2-large-xlsr-53-english/config.json", "wav2vec2-large-xlsr-53-english/configuration.json", "wav2vec2-large-xlsr-53-english/eval.py", "wav2vec2-large-xlsr-53-english/flax_model.msgpack", "wav2vec2-large-xlsr-53-english/full_eval.sh", "wav2vec2-large-xlsr-53-english/language_model/attrs.json", "wav2vec2-large-xlsr-53-english/language_model/lm.binary", "wav2vec2-large-xlsr-53-english/language_model/unigrams.txt", "wav2vec2-large-xlsr-53-english/log_mozilla-foundation_common_voice_6_0_en_test_predictions.txt", "wav2vec2-large-xlsr-53-english/log_mozilla-foundation_common_voice_6_0_en_test_predictions_greedy.txt", "wav2vec2-large-xlsr-53-english/log_mozilla-foundation_common_voice_6_0_en_test_targets.txt", "wav2vec2-large-xlsr-53-english/log_speech-recognition-community-v2_dev_data_en_validation_predictions.txt", "wav2vec2-large-xlsr-53-english/log_speech-recognition-community-v2_dev_data_en_validation_predictions_greedy.txt", "wav2vec2-large-xlsr-53-english/log_speech-recognition-community-v2_dev_data_en_validation_targets.txt", "wav2vec2-large-xlsr-53-english/model.safetensors", "wav2vec2-large-xlsr-53-english/mozilla-foundation_common_voice_6_0_en_test_eval_results.txt", "wav2vec2-large-xlsr-53-english/mozilla-foundation_common_voice_6_0_en_test_eval_results_greedy.txt", "wav2vec2-large-xlsr-53-english/preprocessor_config.json", "wav2vec2-large-xlsr-53-english/pytorch_model.bin", "wav2vec2-large-xlsr-53-english/special_tokens_map.json", "wav2vec2-large-xlsr-53-english/speech-recognition-community-v2_dev_data_en_validation_eval_results.txt", "wav2vec2-large-xlsr-53-english/speech-recognition-community-v2_dev_data_en_validation_eval_results_greedy.txt", "wav2vec2-large-xlsr-53-english/vocab.json" ]
[ 1300, 18697, 507609880, 9193286, 202156, 56322, 527914, 74900, 306535, 165486, 890, 43, 9968229352, 9891539248, 9956985634, 2774887624, 113150, 6623, 4548313, 16837417, 61728, 11361920418, 2328, 36, 5327, 200, 1531, 86, 6198, 1261905572, 1372, 78, 862913451, 3509871, 924339, 925177, 932146, 130354, 130796, 131489, 1261942732, 48, 49, 262, 1262069143, 85, 48, 49, 300 ]
49,148,819,983
eff0178482d4d6e1fed7763f6c3b3f480be908c0
[ "diffusers", "safetensors", "s2v", "arxiv:2503.20314", "arxiv:2508.18621", "license:apache-2.0", "region:us" ]
null
# Wan2.2 <p align="center"> <img src="assets/logo.png" width="400"/> <p> <p align="center"> 💜 <a href="https://wan.video"><b>Wan</b></a> &nbsp&nbsp | &nbsp&nbsp 🖥️ <a href="https://github.com/Wan-Video/Wan2.2">GitHub</a> &nbsp&nbsp | &nbsp&nbsp🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp🤖 <a href="https://modelscope.cn/organization/Wan-AI">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://arxiv.org/abs/2503.20314">Paper</a> &nbsp&nbsp | &nbsp&nbsp 📑 <a href="https://wan.video/welcome?spm=a2ty_o02.30011076.0.0.6c9ee41eCcluqg">Blog</a> &nbsp&nbsp | &nbsp&nbsp 💬 <a href="https://discord.gg/AKNgpMK4Yj">Discord</a>&nbsp&nbsp <br> 📕 <a href="https://alidocs.dingtalk.com/i/nodes/jb9Y4gmKWrx9eo4dCql9LlbYJGXn6lpz">使用指南(中文)</a>&nbsp&nbsp | &nbsp&nbsp 📘 <a href="https://alidocs.dingtalk.com/i/nodes/EpGBa2Lm8aZxe5myC99MelA2WgN7R35y">User Guide(English)</a>&nbsp&nbsp | &nbsp&nbsp💬 <a href="https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg">WeChat(微信)</a>&nbsp&nbsp <br> ----- [**Wan: Open and Advanced Large-Scale Video Generative Models**](https://arxiv.org/abs/2503.20314) <be> We are excited to introduce **Wan2.2**, a major upgrade to our foundational video models. With **Wan2.2**, we have focused on incorporating the following innovations: - 👍 **Effective MoE Architecture**: Wan2.2 introduces a Mixture-of-Experts (MoE) architecture into video diffusion models. By separating the denoising process cross timesteps with specialized powerful expert models, this enlarges the overall model capacity while maintaining the same computational cost. - 👍 **Cinematic-level Aesthetics**: Wan2.2 incorporates meticulously curated aesthetic data, complete with detailed labels for lighting, composition, contrast, color tone, and more. This allows for more precise and controllable cinematic style generation, facilitating the creation of videos with customizable aesthetic preferences. - 👍 **Complex Motion Generation**: Compared to Wan2.1, Wan2.2 is trained on a significantly larger data, with +65.6% more images and +83.2% more videos. This expansion notably enhances the model's generalization across multiple dimensions such as motions, semantics, and aesthetics, achieving TOP performance among all open-sourced and closed-sourced models. - 👍 **Efficient High-Definition Hybrid TI2V**: Wan2.2 open-sources a 5B model built with our advanced Wan2.2-VAE that achieves a compression ratio of **16×16×4**. This model supports both text-to-video and image-to-video generation at 720P resolution with 24fps and can also run on consumer-grade graphics cards like 4090. It is one of the fastest **720P@24fps** models currently available, capable of serving both the industrial and academic sectors simultaneously. ## Video Demos <div align="center"> <video width="80%" controls> <source src="https://cloud.video.taobao.com/vod/4szTT1B0LqXvJzmuEURfGRA-nllnqN_G2AT0ZWkQXoQ.mp4" type="video/mp4"> Your browser does not support the video tag. </video> </div> ## 🔥 Latest News!! * Aug 26, 2025: 🎵 We introduce **[Wan2.2-S2V-14B](https://humanaigc.github.io/wan-s2v-webpage)**, an audio-driven cinematic video generation model, including [inference code](#run-speech-to-video-generation), [model weights](#model-download), and [technical report](https://humanaigc.github.io/wan-s2v-webpage/content/wan-s2v.pdf)! 
Now you can try it on [wan.video](https://wan.video/), [ModelScope Gradio](https://www.modelscope.cn/studios/Wan-AI/Wan2.2-S2V) or [HuggingFace Gradio](https://huggingface.co/spaces/Wan-AI/Wan2.2-S2V)!
* Jul 28, 2025: 👋 We have opened a [HF space](https://huggingface.co/spaces/Wan-AI/Wan-2.2-5B) using the TI2V-5B model. Enjoy!
* Jul 28, 2025: 👋 Wan2.2 has been integrated into ComfyUI ([CN](https://docs.comfy.org/zh-CN/tutorials/video/wan/wan2_2) | [EN](https://docs.comfy.org/tutorials/video/wan/wan2_2)). Enjoy!
* Jul 28, 2025: 👋 Wan2.2's T2V, I2V and TI2V have been integrated into Diffusers ([T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B-Diffusers) | [I2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B-Diffusers) | [TI2V-5B](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers)). Feel free to give it a try!
* Jul 28, 2025: 👋 We've released the inference code and model weights of **Wan2.2**.

## Community Works

If your research or project builds upon [**Wan2.1**](https://github.com/Wan-Video/Wan2.1) or [**Wan2.2**](https://github.com/Wan-Video/Wan2.2), and you would like more people to see it, please inform us.

- [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio) provides comprehensive support for Wan 2.2, including low-GPU-memory layer-by-layer offload, FP8 quantization, sequence parallelism, LoRA training, and full training.
- [Kijai's ComfyUI WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper) is an alternative implementation of Wan models for ComfyUI. Thanks to its Wan-only focus, it is on the front line of getting cutting-edge optimizations and hot research features, which are often hard to integrate into ComfyUI quickly due to its more rigid structure.

## 📑 Todo List

- Wan2.2-S2V Speech-to-Video
    - [x] Inference code of Wan2.2-S2V
    - [x] Checkpoints of Wan2.2-S2V-14B
    - [ ] ComfyUI integration
    - [ ] Diffusers integration

## Run Wan2.2

#### Installation

Clone the repo:
```sh
git clone https://github.com/Wan-Video/Wan2.2.git
cd Wan2.2
```

Install dependencies:
```sh
# Ensure torch >= 2.4.0
# If the installation of `flash_attn` fails, try installing the other packages first and install `flash_attn` last
pip install -r requirements.txt
```

#### Model Download

| Models | Download Links | Description |
|--------|----------------|-------------|
| T2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-T2V-A14B) | Text-to-Video MoE model, supports 480P & 720P |
| I2V-A14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-I2V-A14B) | Image-to-Video MoE model, supports 480P & 720P |
| TI2V-5B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-TI2V-5B) | High-compression VAE, T2V+I2V, supports 720P |
| S2V-14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.2-S2V-14B) 🤖 [ModelScope](https://modelscope.cn/models/Wan-AI/Wan2.2-S2V-14B) | Speech-to-Video model, supports 480P & 720P |

Download models using huggingface-cli:
```sh
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.2-S2V-14B --local-dir ./Wan2.2-S2V-14B
```

Download models using modelscope-cli:
```sh
pip install modelscope
modelscope download Wan-AI/Wan2.2-S2V-14B --local_dir ./Wan2.2-S2V-14B
```

#### Run Speech-to-Video Generation

This repository supports the `Wan2.2-S2V-14B` Speech-to-Video model and supports video generation at both 480P and 720P resolutions.

- Single-GPU Speech-to-Video inference
```sh
python generate.py --task s2v-14B --size 1024*704 --ckpt_dir ./Wan2.2-S2V-14B/ --offload_model True --convert_model_dtype --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard." --image "examples/i2v_input.JPG" --audio "examples/talk.wav"
# Without setting --num_clip, the generated video length will automatically adjust based on the input audio length
```

> 💡 This command can run on a GPU with at least 80GB VRAM.

- Multi-GPU inference using FSDP + DeepSpeed Ulysses
```sh
torchrun --nproc_per_node=8 generate.py --task s2v-14B --size 1024*704 --ckpt_dir ./Wan2.2-S2V-14B/ --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard." --image "examples/i2v_input.JPG" --audio "examples/talk.wav"
```

- Pose + Audio driven generation
```sh
torchrun --nproc_per_node=8 generate.py --task s2v-14B --size 1024*704 --ckpt_dir ./Wan2.2-S2V-14B/ --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "a person is singing" --image "examples/pose.png" --audio "examples/sing.MP3" --pose_video "./examples/pose.mp4"
```

> 💡 For the Speech-to-Video task, the `size` parameter represents the area of the generated video, with the aspect ratio following that of the original input image (see the sketch below).
> 💡 The model can generate videos from audio input combined with a reference image and an optional text prompt.
> 💡 The `--pose_video` parameter enables pose-driven generation, allowing the model to follow specific pose sequences while generating videos synchronized with audio input.
> 💡 The `--num_clip` parameter controls the number of video clips generated, which is useful for quick previews with shorter generation time.
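To make the `--size` tip above concrete, here is a small illustrative sketch of how a target area plus the input image's aspect ratio can determine the output resolution. It is not taken from `generate.py`; the function name and the snapping to a multiple of 16 are assumptions.

```python
import math

def resolve_output_size(size_arg: str, input_w: int, input_h: int, multiple: int = 16) -> tuple[int, int]:
    """Illustrative only: pick an output resolution whose area matches --size
    while following the input image's aspect ratio."""
    target_w, target_h = (int(x) for x in size_arg.split("*"))
    area = target_w * target_h          # --size gives the pixel area, not fixed dimensions
    aspect = input_w / input_h          # aspect ratio follows the input image
    out_h = math.sqrt(area / aspect)
    out_w = out_h * aspect
    # snap to a multiple (e.g. of 16) so the latent grid divides evenly -- an assumption
    snap = lambda v: max(multiple, int(round(v / multiple)) * multiple)
    return snap(out_w), snap(out_h)

# e.g. --size 1024*704 with a 16:9 reference image
print(resolve_output_size("1024*704", 1920, 1080))  # -> (1136, 640), area close to 1024*704
```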
## Computational Efficiency on Different GPUs

We test the computational efficiency of different **Wan2.2** models on different GPUs in the following table. The results are presented in the format: **Total time (s) / peak GPU memory (GB)**.

<div align="center">
    <img src="assets/comp_effic.png" alt="" style="width: 80%;" />
</div>

> The parameter settings for the tests presented in this table are as follows:
> (1) Multi-GPU: 14B: `--ulysses_size 4/8 --dit_fsdp --t5_fsdp`, 5B: `--ulysses_size 4/8 --offload_model True --convert_model_dtype --t5_cpu`; Single-GPU: 14B: `--offload_model True --convert_model_dtype`, 5B: `--offload_model True --convert_model_dtype --t5_cpu` (`--convert_model_dtype` converts model parameter types to `config.param_dtype`);
> (2) The distributed testing utilizes the built-in FSDP and Ulysses implementations, with FlashAttention3 deployed on Hopper architecture GPUs;
> (3) Tests were run without the `--use_prompt_extend` flag;
> (4) Reported results are the average of multiple samples taken after the warm-up phase.

-------

## Introduction of Wan2.2

**Wan2.2** builds on the foundation of Wan2.1 with notable improvements in generation quality and model capability. This upgrade is driven by a series of key technical innovations, mainly including the Mixture-of-Experts (MoE) architecture, upgraded training data, and high-compression video generation.

##### (1) Mixture-of-Experts (MoE) Architecture

Wan2.2 introduces a Mixture-of-Experts (MoE) architecture into the video generation diffusion model. MoE has been widely validated in large language models as an efficient approach to increasing total model parameters while keeping inference cost nearly unchanged. In Wan2.2, the A14B model series adopts a two-expert design tailored to the denoising process of diffusion models: a high-noise expert for the early stages, focusing on overall layout, and a low-noise expert for the later stages, refining video details. Each expert model has about 14B parameters, resulting in a total of 27B parameters but only 14B active parameters per step, keeping inference computation and GPU memory nearly unchanged.

<div align="center">
    <img src="assets/moe_arch.png" alt="" style="width: 90%;" />
</div>

The transition point between the two experts is determined by the signal-to-noise ratio (SNR), a metric that decreases monotonically as the denoising step $t$ increases. At the beginning of the denoising process, $t$ is large and the noise level is high, so the SNR is at its minimum, denoted as ${SNR}_{min}$. In this stage, the high-noise expert is activated. We define a threshold step ${t}_{moe}$ corresponding to half of ${SNR}_{min}$, and switch to the low-noise expert when $t<{t}_{moe}$.

<div align="center">
    <img src="assets/moe_2.png" alt="" style="width: 90%;" />
</div>

To validate the effectiveness of the MoE architecture, four settings are compared based on their validation loss curves. The baseline **Wan2.1** model does not employ the MoE architecture. Among the MoE-based variants, **Wan2.1 & High-Noise Expert** reuses the Wan2.1 model as the low-noise expert while using Wan2.2's high-noise expert, whereas **Wan2.1 & Low-Noise Expert** uses Wan2.1 as the high-noise expert and employs Wan2.2's low-noise expert. **Wan2.2 (MoE)** (our final version) achieves the lowest validation loss, indicating that its generated video distribution is closest to the ground truth and exhibits superior convergence.
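The switching rule described above amounts to a simple threshold on the denoising step. A minimal sketch follows; the expert objects and `t_moe` are placeholders, not the actual Wan2.2 API.

```python
# Illustrative only: route each denoising step to one of the two experts.
def select_expert(t: int, t_moe: int, high_noise_expert, low_noise_expert):
    """Early steps (large t, high noise, low SNR) use the high-noise expert for overall
    layout; once t drops below t_moe, the low-noise expert refines the details."""
    return low_noise_expert if t < t_moe else high_noise_expert

def denoise(latents, timesteps, t_moe, high_noise_expert, low_noise_expert):
    for t in timesteps:  # timesteps run from high noise (large t) down to low noise
        expert = select_expert(t, t_moe, high_noise_expert, low_noise_expert)
        latents = expert(latents, t)  # only one ~14B expert is active per step
    return latents
```

Because only one expert runs at each step, per-step compute and memory stay close to a single 14B dense model even though the total parameter count is 27B.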
##### (2) Efficient High-Definition Hybrid TI2V

To enable more efficient deployment, Wan2.2 also explores a high-compression design. In addition to the 27B MoE models, a 5B dense model, i.e., TI2V-5B, is released. It is supported by a high-compression Wan2.2-VAE, which achieves a $T\times H\times W$ compression ratio of $4\times16\times16$, increasing the overall compression rate to 64 while maintaining high-quality video reconstruction. With an additional patchification layer, the total compression ratio of TI2V-5B reaches $4\times32\times32$. Without specific optimization, TI2V-5B can generate a 5-second 720P video in under 9 minutes on a single consumer-grade GPU, ranking among the fastest 720P@24fps video generation models. This model also natively supports both text-to-video and image-to-video tasks within a single unified framework, covering both academic research and practical applications.

<div align="center">
    <img src="assets/vae.png" alt="" style="width: 80%;" />
</div>

##### Comparisons to SOTAs

We compared Wan2.2 with leading closed-source commercial models on our new Wan-Bench 2.0, evaluating performance across multiple crucial dimensions. The results demonstrate that Wan2.2 achieves superior performance compared to these leading models.

<div align="center">
    <img src="assets/performance.png" alt="" style="width: 90%;" />
</div>

## Citation

If you find our work helpful, please cite us.

```
@article{wan2025,
      title={Wan: Open and Advanced Large-Scale Video Generative Models},
      author={Team Wan and Ang Wang and Baole Ai and Bin Wen and Chaojie Mao and Chen-Wei Xie and Di Chen and Feiwu Yu and Haiming Zhao and Jianxiao Yang and Jianyuan Zeng and Jiayu Wang and Jingfeng Zhang and Jingren Zhou and Jinkai Wang and Jixuan Chen and Kai Zhu and Kang Zhao and Keyu Yan and Lianghua Huang and Mengyang Feng and Ningyi Zhang and Pandeng Li and Pingyu Wu and Ruihang Chu and Ruili Feng and Shiwei Zhang and Siyang Sun and Tao Fang and Tianxing Wang and Tianyi Gui and Tingyu Weng and Tong Shen and Wei Lin and Wei Wang and Wei Wang and Wenmeng Zhou and Wente Wang and Wenting Shen and Wenyuan Yu and Xianzhong Shi and Xiaoming Huang and Xin Xu and Yan Kou and Yangyu Lv and Yifei Li and Yijing Liu and Yiming Wang and Yingya Zhang and Yitong Huang and Yong Li and You Wu and Yu Liu and Yulin Pan and Yun Zheng and Yuntao Hong and Yupeng Shi and Yutong Feng and Zeyinzi Jiang and Zhen Han and Zhi-Fan Wu and Ziyu Liu},
      journal = {arXiv preprint arXiv:2503.20314},
      year={2025}
}

@article{wan2025s2v,
      title={Wan-S2V: Audio-Driven Cinematic Video Generation},
      author={Xin Gao, Li Hu, Siqi Hu, Mingyang Huang, Chaonan Ji, Dechao Meng, Jinwei Qi, Penchong Qiao, Zhen Shen, Yafei Song, Ke Sun, Linrui Tian, Guangyuan Wang, Qi Wang, Zhongjian Wang, Jiayu Xiao, Sheng Xu, Bang Zhang, Peng Zhang, Xindi Zhang, Zhe Zhang, Jingren Zhou, Lian Zhuo},
      journal={arXiv preprint arXiv:2508.18621},
      year={2025}
}
```

## License Agreement

The models in this repository are licensed under the Apache 2.0 License. We claim no rights over your generated contents, granting you the freedom to use them while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).

## Acknowledgements

We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [Qwen](https://huggingface.co/Qwen), [umt5-xxl](https://huggingface.co/google/umt5-xxl), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research.

## Contact Us

If you would like to leave a message for our research or product teams, feel free to join our [Discord](https://discord.gg/AKNgpMK4Yj) or [WeChat groups](https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg)!
[ "Wan-AI/Wan2.2-S2V", "mjinabq/Wan2.2-S2V", "opparco/Wan2.2-S2V", "ItsMpilo/Wan2.2-S2V" ]
[ "apache-2.0" ]
null
null
null
null
null
null
[ "s2v" ]
null
null
null
free
company
[ "China" ]
null
null
null
null
null
null
null
null
null
688a4ad0a0c7bbd72715e857
Phr00t/WAN2.2-14B-Rapid-AllInOne
Phr00t
{ "models": [ { "_id": "6881e60ffcffaee6d84fe9e4", "id": "Wan-AI/Wan2.2-I2V-A14B" } ], "relation": "finetune" }
0
0
False
2025-07-30T16:39:44
2025-08-23T23:51:11
wan2.2
494
166
null
image-to-video
null
[ ".gitattributes", "README.md", "v2/wan2.2-i2v-aio-v2.safetensors", "v2/wan2.2-t2v-aio-v2.safetensors", "v3/wan2.2-i2v-rapid-aio-540p-v3.safetensors", "v3/wan2.2-i2v-rapid-aio-720p-v3.safetensors", "v3/wan2.2-t2v-rapid-aio-v3.safetensors", "v4/wan2.2-i2v-rapid-aio-v4.safetensors", "v4/wan2.2-t2v-rapid-aio-v4.safetensors", "v5/wan2.2-i2v-rapid-aio-v5.safetensors", "v6/.placeholder", "v6/wan2.2-i2v-rapid-aio-v6.safetensors", "v6/wan2.2-t2v-rapid-aio-v6.safetensors", "v7/.read_model_card", "v7/wan2.2-i2v-rapid-aio-nsfw-v7.safetensors", "v7/wan2.2-i2v-rapid-aio-v7.safetensors", "v7/wan2.2-t2v-rapid-aio-nsfw-v7.safetensors", "v8/wan2.2-i2v-rapid-aio-nsfw-v8.safetensors", "v8/wan2.2-i2v-rapid-aio-v8.safetensors", "v8/wan2.2-t2v-rapid-aio-v8.1.safetensors", "v8/wan2.2-t2v-rapid-aio-v8.safetensors", "v9/wan2.2-i2v-rapid-aio-nsfw-v9.2.safetensors", "v9/wan2.2-i2v-rapid-aio-v9.safetensors", "v9/wan2.2-t2v-rapid-aio-nsfw-v9.2.safetensors", "v9/wan2.2-t2v-rapid-aio-v9.safetensors", "wan2.2-i2v-rapid-aio-example.json", "wan2.2-i2v-rapid-aio.safetensors", "wan2.2-t2v-rapid-aio-example.json", "wan2.2-t2v-rapid-aio.safetensors" ]
null
null
6c7be992d665858c886ad1c7791b7a83db2478c1
[ "wan2.2", "wan", "accelerator", "image-to-video", "base_model:Wan-AI/Wan2.2-I2V-A14B", "base_model:finetune:Wan-AI/Wan2.2-I2V-A14B", "region:us" ]
null
These are mixtures of WAN 2.2 and other WAN-like models and accelerators (with CLIP and VAE also included) to provide a fast, "all in one" solution for making videos as easily and quickly as possible. FP8 precision. Generally the latest version available for each type of model (image to video or text to video) is recommended. **NSFW Merges:** Degenerates should steer clear of these merges, as they are only for the most civilized people of culture or scientific researchers. These merge various spicy WAN 2.1+2.2 LORAs at generally low strengths to provide a "jack of all trades, master of none" all in one despicable solution. If you are not getting the results you want, add more LORAs or just use the non-NSFW versions with hand-picked LORAs. You just need to use the basic ComfyUI "Load Checkpoint" node with these, as you can take the VAE, CLIP and Model all from one AIO safetensors (saved in your 'checkpoints' folder). All models are intended to use 1 CFG and 4 steps. See sampler recommendations for each version below. WAN 2.1 LORA compatibility is generally still good, along with "low noise" WAN 2.2 LORA compatibility (do not use "high noise" LORAs). You might need to adjust LORA strengths (up or down) to get results you want, though. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/631be8402ea8535ea48abbc6/t_SxUFP9oyNz0C8dj6jze.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/631be8402ea8535ea48abbc6/GNDAWnRHAjt8vPY0wXNTq.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/631be8402ea8535ea48abbc6/F3tB7EhHMS1Gn-7iplmV8.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/631be8402ea8535ea48abbc6/70X-8YUbn5hPogrG5V8Kv.png) Seems to work even on 8GB VRAM: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/631be8402ea8535ea48abbc6/i4NRFi7FX_j7FUZyvmImw.png) **CHANGELOG/VERSIONS:** **base:** This is the first attempt and very "stable", but mostly WAN 2.1 with few WAN 2.2 features. sa_solver recommended. **V2:** This is a more dynamic mixture with more WAN 2.2 features. sa_solver OR euler_a sampler recommended. Suffers from minor color shifts and noise in I2V, typically just at the start. **V3:** This is a mixture of SkyReels and WAN 2.2, which should improve prompt adherence and quality. euler_a sampler recommended, beta scheduler. Suffers from minor color shifts and noise in I2V, typically just at the start. **V4:** WAN 2.2 Lightning in the mix! euler_a/beta recommended. I2V noise and color shifting generally improved, but motion is a bit overexaggerated. **V5:** Improved overexaggeration of I2V model. euler_a/beta recommended. **V6:** New merging structure and overall significantly improved quality. I2V noise for the first 1-2 frames still exists, but it clears up much better than previous versions. Some WAN 2.1 LORAs at heavy strengths may cause up to 5 poor early frames with T2V, where discarding (or lowering strengths) may help. sa_solver/beta recommended. I2V rarely suffers from some dramatic scene shifts. **V7:** I2V scene shifting should be fixed, but some I2V noise persists (generally for just the first 1-2 frames). No changes needed for the T2V model, so that remains at V6. sa_solver/beta recommended. **V8:** T2V is now based entirely off of WAN 2.2 "low" (with PUSA, SkyReels and Lightning accelerators mixed in), which should resolve noise problems with it (8.1 adds more SkyReels). I2V scaled back some of the WAN 2.2 mix, which was contributing to noise problems. 
There is still some minor I2V noise, but the mix is now a more delicate balance of WAN 2.2 + SkyReels to keep decent motion and flexibility. Euler_a/beta recommended.

**V9:** Removed PUSA and SkyReels from the WAN 2.2 side of I2V (and completely from T2V), as I think PUSA/SkyReels weren't consistently helping (and were sometimes hurting) when applied to WAN 2.2. This should provide a more reliable base to work from. **euler_a/beta** recommended, but feel free to experiment with sa_solver/beta or others!

Looking for GGUFs? Looks like DooFY87 on CivitAI has been doing that: https://civitai.com/models/1855105/rapid-wan-22-i2v-gguf

Looking for FP16 precision? TekeshiX has been helping me build variants in FP16 format. These should be based on the V5 I2V model: https://huggingface.co/TekeshiX/RAPID-AIO-FP16/tree/main

**DISCLAIMER:** As you may expect, some compromises had to be made to reach this level of speed and simplicity. If you want to run the full WAN 2.2 pair of models with more complex workflows and longer generation times (which will give higher-quality results), or you want control over the accelerator LORAs included in this merge, there are many resources elsewhere for that.
null
null
null
null
null
null
[ "image-to-video" ]
null
null
[ "vision" ]
[ "text", "image" ]
[ "video" ]
user
user
[ "user" ]
null
null
null
null
null
null
null
null
null
68a686808e8db90f8998697a
deepseek-ai/DeepSeek-V3.1
deepseek-ai
null
76,644
76,644
False
2025-08-21T02:37:52
2025-08-26T08:14:11
transformers
668
163
null
text-generation
{"parameters": {"BF16": 3918786560, "F8_E4M3": 680571043840, "F32": 41555600}, "total": 684531386000}
[ ".gitattributes", "LICENSE", "README.md", "assets/chat_template.jinja", "assets/code_agent_trajectory.html", "assets/search_python_tool_trajectory.html", "assets/search_tool_trajectory.html", "config.json", "configuration_deepseek.py", "generation_config.json", "model-00001-of-000163.safetensors", "model-00002-of-000163.safetensors", "model-00003-of-000163.safetensors", "model-00004-of-000163.safetensors", "model-00005-of-000163.safetensors", "model-00006-of-000163.safetensors", "model-00007-of-000163.safetensors", "model-00008-of-000163.safetensors", "model-00009-of-000163.safetensors", "model-00010-of-000163.safetensors", "model-00011-of-000163.safetensors", "model-00012-of-000163.safetensors", "model-00013-of-000163.safetensors", "model-00014-of-000163.safetensors", "model-00015-of-000163.safetensors", "model-00016-of-000163.safetensors", "model-00017-of-000163.safetensors", "model-00018-of-000163.safetensors", "model-00019-of-000163.safetensors", "model-00020-of-000163.safetensors", "model-00021-of-000163.safetensors", "model-00022-of-000163.safetensors", "model-00023-of-000163.safetensors", "model-00024-of-000163.safetensors", "model-00025-of-000163.safetensors", "model-00026-of-000163.safetensors", "model-00027-of-000163.safetensors", "model-00028-of-000163.safetensors", "model-00029-of-000163.safetensors", "model-00030-of-000163.safetensors", "model-00031-of-000163.safetensors", "model-00032-of-000163.safetensors", "model-00033-of-000163.safetensors", "model-00034-of-000163.safetensors", "model-00035-of-000163.safetensors", "model-00036-of-000163.safetensors", "model-00037-of-000163.safetensors", "model-00038-of-000163.safetensors", "model-00039-of-000163.safetensors", "model-00040-of-000163.safetensors", "model-00041-of-000163.safetensors", "model-00042-of-000163.safetensors", "model-00043-of-000163.safetensors", "model-00044-of-000163.safetensors", "model-00045-of-000163.safetensors", "model-00046-of-000163.safetensors", "model-00047-of-000163.safetensors", "model-00048-of-000163.safetensors", "model-00049-of-000163.safetensors", "model-00050-of-000163.safetensors", "model-00051-of-000163.safetensors", "model-00052-of-000163.safetensors", "model-00053-of-000163.safetensors", "model-00054-of-000163.safetensors", "model-00055-of-000163.safetensors", "model-00056-of-000163.safetensors", "model-00057-of-000163.safetensors", "model-00058-of-000163.safetensors", "model-00059-of-000163.safetensors", "model-00060-of-000163.safetensors", "model-00061-of-000163.safetensors", "model-00062-of-000163.safetensors", "model-00063-of-000163.safetensors", "model-00064-of-000163.safetensors", "model-00065-of-000163.safetensors", "model-00066-of-000163.safetensors", "model-00067-of-000163.safetensors", "model-00068-of-000163.safetensors", "model-00069-of-000163.safetensors", "model-00070-of-000163.safetensors", "model-00071-of-000163.safetensors", "model-00072-of-000163.safetensors", "model-00073-of-000163.safetensors", "model-00074-of-000163.safetensors", "model-00075-of-000163.safetensors", "model-00076-of-000163.safetensors", "model-00077-of-000163.safetensors", "model-00078-of-000163.safetensors", "model-00079-of-000163.safetensors", "model-00080-of-000163.safetensors", "model-00081-of-000163.safetensors", "model-00082-of-000163.safetensors", "model-00083-of-000163.safetensors", "model-00084-of-000163.safetensors", "model-00085-of-000163.safetensors", "model-00086-of-000163.safetensors", "model-00087-of-000163.safetensors", "model-00088-of-000163.safetensors", 
"model-00089-of-000163.safetensors", "model-00090-of-000163.safetensors", "model-00091-of-000163.safetensors", "model-00092-of-000163.safetensors", "model-00093-of-000163.safetensors", "model-00094-of-000163.safetensors", "model-00095-of-000163.safetensors", "model-00096-of-000163.safetensors", "model-00097-of-000163.safetensors", "model-00098-of-000163.safetensors", "model-00099-of-000163.safetensors", "model-00100-of-000163.safetensors", "model-00101-of-000163.safetensors", "model-00102-of-000163.safetensors", "model-00103-of-000163.safetensors", "model-00104-of-000163.safetensors", "model-00105-of-000163.safetensors", "model-00106-of-000163.safetensors", "model-00107-of-000163.safetensors", "model-00108-of-000163.safetensors", "model-00109-of-000163.safetensors", "model-00110-of-000163.safetensors", "model-00111-of-000163.safetensors", "model-00112-of-000163.safetensors", "model-00113-of-000163.safetensors", "model-00114-of-000163.safetensors", "model-00115-of-000163.safetensors", "model-00116-of-000163.safetensors", "model-00117-of-000163.safetensors", "model-00118-of-000163.safetensors", "model-00119-of-000163.safetensors", "model-00120-of-000163.safetensors", "model-00121-of-000163.safetensors", "model-00122-of-000163.safetensors", "model-00123-of-000163.safetensors", "model-00124-of-000163.safetensors", "model-00125-of-000163.safetensors", "model-00126-of-000163.safetensors", "model-00127-of-000163.safetensors", "model-00128-of-000163.safetensors", "model-00129-of-000163.safetensors", "model-00130-of-000163.safetensors", "model-00131-of-000163.safetensors", "model-00132-of-000163.safetensors", "model-00133-of-000163.safetensors", "model-00134-of-000163.safetensors", "model-00135-of-000163.safetensors", "model-00136-of-000163.safetensors", "model-00137-of-000163.safetensors", "model-00138-of-000163.safetensors", "model-00139-of-000163.safetensors", "model-00140-of-000163.safetensors", "model-00141-of-000163.safetensors", "model-00142-of-000163.safetensors", "model-00143-of-000163.safetensors", "model-00144-of-000163.safetensors", "model-00145-of-000163.safetensors", "model-00146-of-000163.safetensors", "model-00147-of-000163.safetensors", "model-00148-of-000163.safetensors", "model-00149-of-000163.safetensors", "model-00150-of-000163.safetensors", "model-00151-of-000163.safetensors", "model-00152-of-000163.safetensors", "model-00153-of-000163.safetensors", "model-00154-of-000163.safetensors", "model-00155-of-000163.safetensors", "model-00156-of-000163.safetensors", "model-00157-of-000163.safetensors", "model-00158-of-000163.safetensors", "model-00159-of-000163.safetensors", "model-00160-of-000163.safetensors", "model-00161-of-000163.safetensors", "model-00162-of-000163.safetensors", "model-00163-of-000163.safetensors", "model.safetensors.index.json", "modeling_deepseek.py", "tokenizer.json", "tokenizer_config.json" ]
[ 1519, 1084, 11296, 3330, 22659, 19652, 10272, 1686, 9897, 171, 5234139343, 4302383966, 4302384375, 4302349996, 4302384154, 4372073602, 4306080097, 4302384356, 4302350190, 4302383960, 4302384375, 1321583941, 4302317244, 4302384328, 4302350218, 4302383932, 4302384377, 4302350026, 4302384124, 4302384377, 4302350413, 4302384900, 4302350808, 4302384504, 4302384961, 4302350620, 4302384692, 4302384963, 4302350448, 4302384884, 4302350824, 4302384488, 4302384963, 1747417474, 4302317817, 4302384914, 4302350794, 4302384518, 4302384963, 4302350602, 4302384710, 4302384963, 4302350432, 4302384900, 4302350808, 4302384504, 4302384961, 4302350620, 4302384692, 4302384963, 4302350448, 4302384884, 4302350824, 4302384488, 4302384963, 1747417474, 4302317817, 4302384914, 4302350794, 4302384518, 4302384963, 4302350602, 4302384710, 4302384963, 4302350432, 4302384900, 4302350808, 4302384504, 4302384961, 4302350620, 4302384692, 4302384963, 4302350448, 4302384884, 4302350824, 4302384488, 4302384963, 1747417474, 4302317817, 4302384914, 4302350794, 4302384518, 4302384963, 4302350602, 4302384710, 4302384963, 4302350432, 4302384900, 4302350808, 4302384504, 4302384961, 4302350620, 4302384692, 4302384963, 4302350448, 4302384884, 4302350824, 4302384488, 4302384963, 1747417474, 4302317817, 4302384914, 4302350794, 4302384518, 4302384963, 4302350602, 4302384710, 4302384963, 4302350432, 4302384900, 4302350808, 4302384504, 4302384961, 4302350620, 4302384692, 4302384963, 4302350448, 4302384884, 4302350824, 4302384488, 4302384963, 1747417474, 4302317817, 4302384914, 4302350794, 4302384518, 4302384963, 4302350602, 4302384710, 4302384963, 4302350432, 4302384900, 4302350808, 4302384504, 4302384961, 4302350620, 4302384692, 4302384963, 4302350448, 4302384884, 3142388798, 4302317817, 4302384914, 4302350794, 4302384518, 4302384963, 4302350602, 4302384710, 4302384963, 4302350432, 4302384900, 4302350808, 4302384504, 4302384961, 4302350620, 4302384692, 4302384963, 4302350448, 4302384884, 5230637362, 4302384321, 4302384948, 6584784447, 8898324, 75741, 7847578, 3744 ]
688,603,634,706
9e6c48c3fa6bb3e1cf684675dc02e813ca45d20f
[ "transformers", "safetensors", "deepseek_v3", "text-generation", "conversational", "custom_code", "arxiv:2412.19437", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "fp8", "region:us" ]
null
# DeepSeek-V3.1 <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" /> </div> <hr> <div align="center" style="line-height: 1;"> <a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;"> <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> <a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a> </div> <div align="center" style="line-height: 1;"> <a href="LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a> </div> ## Introduction DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode. Compared to the previous version, this upgrade brings improvements in multiple aspects: - **Hybrid thinking mode**: One model supports both thinking mode and non-thinking mode by changing the chat template. - **Smarter tool calling**: Through post-training optimization, the model's performance in tool usage and agent tasks has significantly improved. - **Higher thinking efficiency**: DeepSeek-V3.1-Think achieves comparable answer quality to DeepSeek-R1-0528, while responding more quickly. DeepSeek-V3.1 is post-trained on the top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long context extension approach, following the methodology outlined in the original DeepSeek-V3 report. We have expanded our dataset by collecting additional long documents and substantially extending both training phases. The 32K extension phase has been increased 10-fold to 630B tokens, while the 128K extension phase has been extended by 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the **UE8M0 FP8 scale data format on both model weights and activations** to ensure compatibility with microscaling data formats. 
Please refer to [DeepGEMM](https://github.com/deepseek-ai/DeepGEMM) for more details.

## Model Downloads

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3.1-Base | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base) \| [ModelScope](https://modelscope.cn/models/deepseek-ai/DeepSeek-V3.1-Base) |
| DeepSeek-V3.1 | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3.1) \| [ModelScope](https://modelscope.cn/models/deepseek-ai/DeepSeek-V3.1) |

</div>

## Chat Template

The details of our chat template are described in `tokenizer_config.json` and `assets/chat_template.jinja`. Below is a brief description.

### Non-Thinking

#### First-Turn

Prefix: `<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>`

With the given prefix, DeepSeek V3.1 generates responses to queries in non-thinking mode. Unlike DeepSeek V3, it introduces an additional token `</think>`.

#### Multi-Turn

Context: `<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>`

Prefix: `<|User|>{query}<|Assistant|></think>`

By concatenating the context and the prefix, we obtain the correct prompt for the query.

### Thinking

#### First-Turn

Prefix: `<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|><think>`

The prefix of thinking mode is similar to DeepSeek-R1.

#### Multi-Turn

Context: `<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>`

Prefix: `<|User|>{query}<|Assistant|><think>`

The multi-turn template is the same as the non-thinking multi-turn chat template. This means the thinking content of previous turns is dropped, while `</think>` is retained in every turn of the context.

### ToolCall

Tool calls are supported in non-thinking mode. The format is:

`<|begin▁of▁sentence|>{system prompt}\n\n{tool_description}<|User|>{query}<|Assistant|></think>`

where the tool_description is

```
## Tools
You have access to the following tools:

### {tool_name1}
Description: {description}

Parameters: {json.dumps(parameters)}

IMPORTANT: ALWAYS adhere to this exact format for tool use:
<|tool▁calls▁begin|><|tool▁call▁begin|>tool_call_name<|tool▁sep|>tool_call_arguments<|tool▁call▁end|>{additional_tool_calls}<|tool▁calls▁end|>

Where:
- `tool_call_name` must be an exact match to one of the available tools
- `tool_call_arguments` must be valid JSON that strictly follows the tool's Parameters Schema
- For multiple tool calls, chain them directly without separators or spaces
```

### Code-Agent

We support various code agent frameworks. Please refer to the tool-call format above to create your own code agents. An example is shown in `assets/code_agent_trajectory.html`.

### Search-Agent

We designed a specific format for search tool calls in thinking mode to support the search agent. For complex questions that require accessing external or up-to-date information, DeepSeek-V3.1 can leverage a user-provided search tool through a multi-turn tool-calling process. Please refer to `assets/search_tool_trajectory.html` and `assets/search_python_tool_trajectory.html` for the detailed template.
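To make the prefixes above concrete, here is a small illustrative sketch that assembles the first-turn prompt strings by hand; in practice, `tokenizer.apply_chat_template` (shown in the usage example below) does this for you.

```python
# Hand-assembled first-turn prefixes, mirroring the templates described above.
BOS = "<|begin▁of▁sentence|>"

def first_turn_prefix(system_prompt: str, query: str, thinking: bool) -> str:
    """Thinking mode opens a reasoning block with <think>; non-thinking mode
    closes it immediately with </think> so the model answers directly."""
    marker = "<think>" if thinking else "</think>"
    return f"{BOS}{system_prompt}<|User|>{query}<|Assistant|>{marker}"

print(first_turn_prefix("You are a helpful assistant", "1+1=?", thinking=False))
# <|begin▁of▁sentence|>You are a helpful assistant<|User|>1+1=?<|Assistant|></think>
```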
## Evaluation

| Category | Benchmark (Metric) | DeepSeek V3.1-NonThinking | DeepSeek V3 0324 | DeepSeek V3.1-Thinking | DeepSeek R1 0528 |
|----------|----------------------------------|-----------------|---|---|---|
| General | MMLU-Redux (EM) | 91.8 | 90.5 | 93.7 | 93.4 |
| | MMLU-Pro (EM) | 83.7 | 81.2 | 84.8 | 85.0 |
| | GPQA-Diamond (Pass@1) | 74.9 | 68.4 | 80.1 | 81.0 |
| | Humanity's Last Exam (Pass@1) | - | - | 15.9 | 17.7 |
| Search Agent | BrowseComp | - | - | 30.0 | 8.9 |
| | BrowseComp_zh | - | - | 49.2 | 35.7 |
| | Humanity's Last Exam (Python + Search) | - | - | 29.8 | 24.8 |
| | SimpleQA | - | - | 93.4 | 92.3 |
| Code | LiveCodeBench (2408-2505) (Pass@1) | 56.4 | 43.0 | 74.8 | 73.3 |
| | Codeforces-Div1 (Rating) | - | - | 2091 | 1930 |
| | Aider-Polyglot (Acc.) | 68.4 | 55.1 | 76.3 | 71.6 |
| Code Agent | SWE Verified (Agent mode) | 66.0 | 45.4 | - | 44.6 |
| | SWE-bench Multilingual (Agent mode) | 54.5 | 29.3 | - | 30.5 |
| | Terminal-bench (Terminus 1 framework) | 31.3 | 13.3 | - | 5.7 |
| Math | AIME 2024 (Pass@1) | 66.3 | 59.4 | 93.1 | 91.4 |
| | AIME 2025 (Pass@1) | 49.8 | 51.3 | 88.4 | 87.5 |
| | HMMT 2025 (Pass@1) | 33.5 | 29.2 | 84.2 | 79.4 |

Note:
- Search agents are evaluated with our internal search framework, which uses a commercial search API + webpage filter + 128K context window. Search agent results of R1-0528 are evaluated with a pre-defined workflow.
- SWE-bench is evaluated with our internal code agent framework.
- HLE is evaluated with the text-only subset.

### Usage Example

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "<think>Hmm</think>I am DeepSeek"},
    {"role": "user", "content": "1+1=?"}
]

tokenizer.apply_chat_template(messages, tokenize=False, thinking=True, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|><think>'

tokenizer.apply_chat_template(messages, tokenize=False, thinking=False, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|></think>'
```

## How to Run Locally

The model structure of DeepSeek-V3.1 is the same as DeepSeek-V3. Please visit the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running this model locally.

**Usage Recommendations:**

1. **The `mlp.gate.e_score_correction_bias` parameters should be loaded and computed in FP32 precision.**
2. **Ensure that FP8 model weights and activations are formatted using the UE8M0 scale format.**

## License

This repository and the model weights are licensed under the [MIT License](LICENSE).

## Citation

```
@misc{deepseekai2024deepseekv3technicalreport,
      title={DeepSeek-V3 Technical Report},
      author={DeepSeek-AI},
      year={2024},
      eprint={2412.19437},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.19437},
}
```

## Contact

If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).
[ "enzostvs/deepsite", "umint/ai", "ReallyFloppyPenguin/DeepSeek-V3.1-Superintell", "nazdridoy/inferoxy-hub", "Humbl3m33/deepseek-ai-DeepSeek-V3.1", "umint/o4-mini", "Xavernox/Orionixlabs-ai-DeepSeek-V3.1", "KhushParikh/deepseek-ai-DeepSeek-V3.1", "birde2003/for4-ai-Seek-V3.1", "HgThazh/chat", "yz-029/v3", "jernish-10/deepseek-ai-DeepSeek-V3.1", "hamhuhhg/deepseek-ai-DeepSeek-V3.1", "Tradewithchantel/deepseek-ai-DeepSeek-V3.1", "umint/deepseek-ai-DeepSeek-V3.1", "CodeHubb/DeepSeek-V3.1", "Owen-arch/deepseek-ai-DeepSeek-V3.1", "Xavernox/DeepSeek-V3.1", "DarkGman/deepseek-ai-DeepSeek-V3.1", "noamanemal/deepseek-ai-DeepSeek-V3.1", "MoShow/deepseek-ai-DeepSeek-V3.1", "availableenot/deepseek-ai-DeepSeek-V3.1", "Mindhole0/Hole_EN", "xb1698/deepseek-ai-DeepSeek-V3.1", "ReySajju742/Urdu-DeepSeek", "aa124aqdf/deepseek-ai-DeepSeek-V3.1", "mgbam/yeye", "mariusjabami/marius", "markazarshy/deepseek-ai-DeepSeek-V3.1", "BAKAI78/deepseek-ai-DeepSeek-V3.1", "sandylolpotty/document_ai", "danvilvora/deepseek-ai-DeepSeek-V3.1", "ALIG1234/deepseek-ai-DeepSeek-V3.1", "Vitaly-Vyurkov/deepseek-ai-DeepSeek-V3.1", "Usoft/deepseek-ai-DeepSeek-V3.1", "cngsm/deepsite", "adinaththosar/AiChatBot", "Udayxyz/deepseek-ai-DeepSeek-V3.1", "umint/gpt-4.1-nano", "umint/o3", "or1-gary/ee", "thinhvo96/deepseek-ai-DeepSeek-V3.1.0", "ab64/deepseek-ai-DeepSeek-V3.1", "Gu70z/Vioxx", "hsisopqqq/gpt-oss-120b", "akhaliq/deepseek-ai-DeepSeek-V3.1", "saraivaai/criadordesite", "Ai-Bharti/deepsite_3", "Ai-Bharti/deepsite_Ai3", "yzbh007/deepseek-ai-DeepSeek-V3.1", "ColaMachines1/deepseek-ai-DeepSeek-V3.1", "Nasre123/newproject", "hlmaha/deepseek-ai-DeepSeek-V3.1" ]
[ "mit" ]
null
null
684,531,386,000
null
[ "text-generation" ]
null
[ "DeepseekV3ForCausalLM", "deepseek_v3", "AutoModelForCausalLM" ]
[ "text" ]
[ "text" ]
[ "text" ]
free
company
[ "China" ]
null
null
null
null
null
null
null
null
null
68913539bd3d0a833438591d
openai/gpt-oss-20b
openai
null
8,811,370
8,811,370
False
2025-08-04T22:33:29
2025-08-26T17:25:47
transformers
3,342
126
null
text-generation
{"parameters": {"BF16": 1804459584, "U8": 19707494400}, "total": 21511953984}
[ ".gitattributes", "LICENSE", "README.md", "USAGE_POLICY", "chat_template.jinja", "config.json", "generation_config.json", "metal/model.bin", "model-00000-of-00002.safetensors", "model-00001-of-00002.safetensors", "model-00002-of-00002.safetensors", "model.safetensors.index.json", "original/config.json", "original/dtypes.json", "original/model.safetensors", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json" ]
[ 1570, 11357, 7095, 200, 16738, 1806, 177, 13750886400, 4792272488, 4798702184, 4170342232, 36355, 376, 13082, 13761300984, 98, 27868174, 4200 ]
41,301,465,516
6cee5e81ee83917806bbde320786a8fb61efebee
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "arxiv:2508.10925", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "mxfp4", "region:us" ]
null
<p align="center"> <img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg"> </p> <p align="center"> <a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> · <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> · <a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> · <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a> </p> <br> Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases. We’re releasing two flavors of these open models: - `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters) - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters) Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise. > [!NOTE] > This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model. # Highlights * **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment. * **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. * **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users. * **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning. * **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs. * **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization. --- # Inference examples ## Transformers You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package. 
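For example, here is a minimal sketch of the `model.generate` path, using the chat template to render the harmony format. The sampling settings are illustrative, and the dependencies from the next step are assumed to be installed.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

# apply_chat_template renders the harmony response format for us
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```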
To get started, install the necessary dependencies to set up your environment:

```
pip install -U transformers kernels torch
```

Once set up, you can run the model with the snippet below:

```py
from transformers import pipeline
import torch

model_id = "openai/gpt-oss-20b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:

```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
```

[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)

## vLLM

vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.

```bash
uv pip install --pre vllm==0.10.1+gptoss \
    --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
    --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
    --index-strategy unsafe-best-match

vllm serve openai/gpt-oss-20b
```

[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)

## PyTorch / Triton

To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).

## Ollama

If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).

```bash
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```

[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)

#### LM Studio

If you are using [LM Studio](https://lmstudio.ai/) you can use the following command to download.

```bash
# gpt-oss-20b
lms get openai/gpt-oss-20b
```

Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.

---

# Download the model

You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly using the Hugging Face CLI:

```shell
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
```

# Reasoning levels

You can adjust the reasoning level that suits your task across three levels:

* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.

The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
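As an illustration, a minimal sketch of passing the reasoning level through a system message with the same pipeline API used above; the prompt wording beyond "Reasoning: high" is an assumption.

```py
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    # the system message carries the reasoning effort: "Reasoning: low" / "medium" / "high"
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

print(pipe(messages, max_new_tokens=512)[0]["generated_text"][-1])
```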
# Tool use

The gpt-oss models are excellent for:

* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks

# Fine-tuning

Both gpt-oss models can be fine-tuned for a variety of specialized use cases. This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.

# Citation

```bibtex
@misc{openai2025gptoss120bgptoss20bmodel,
      title={gpt-oss-120b & gpt-oss-20b Model Card},
      author={OpenAI},
      year={2025},
      eprint={2508.10925},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.10925},
}
```
[ "umint/ai", "ArthT/openai-gpt-oss-20b", "MGZON/mgzon-app", "SustainabilityLabIITGN/VayuChat", "merterbak/gpt-oss-20b-demo", "fastrtc/talk-to-oai-gpt-oss-20b", "fdaudens/gpt-oss-news-agent", "mAI-models/m-4.0", "Kunal444/KunalGPT", "Paulwalker4884/Nursa", "DESTINY21/mychabot", "Ansjsn/litert-community-Gemma3-1B-IT", "Arphd4/ARK.AI", "laopaoer/ai", "soi147/writing", "nazdridoy/inferoxy-hub", "Humbl3m33/openai-gpt-oss-20b", "Shriyani/PDF-based_RAG_chatbot", "sillyfox/plegle", "umint/o4-mini", "annie416/space", "dayvon123/Daycreate", "Pagi66/linkedin_agent", "Esteban37000/Arkad", "openfree/OpenAI-gpt-oss", "gradio-templates/chatbot", "hassanalikhalid/chatbot", "FassikaF/First_agent_template", "boettiger-lab/ca-30x30-cbn", "boettiger-lab/preview-ca-30x30-cbn", "DocSA/pizza-chatbot", "tlogandesigns/fair-housing-gaurdrail", "RaulGuo1/ttt1", "emasoumipour/hamllm", "ahmedatk/resume-analyzer-template", "salmankhanpm/Telugu_Vocab_Evaluation", "bwilkie/Final_Assignment_Template3", "Paulwalker4884/christopher", "utopia777/bio", "ymali/bipolar", "ysharma/gradio.chat.app-HFIPs", "shradha0806/MyNewChatApp", "vinceomondi/openai-gpt-oss-20b", "Tonic/openai-gpt-oss-20b", "abhilash88/openai-gpt-oss-20b", "jlcruz122/openai-gpt-oss-20b", "sadsawq/Flower", "Clock070303/openai-gpt-oss-20bODIN", "bashiraziz/openai-gpt-oss-20b", "Tonic/gpt-oss-20b-mutlilingual-reasoning", "ArthT/openai-gpt-oss-20b-0din", "ManishThota/gpt-oss-20b", "TintinWu2025/openai-gpt-oss-20b", "ev032000/gpt-test2", "ev032000/gpttest3", "fdsjkhfdjksfhnkldsfjos/openai-gpt-oss-20b", "madurc29/test-oss", "SiddhJagani/openai-gpt-oss-20b", "SiddhJagani/Jwero-Internal", "elWaiEle/LunitaGlitch", "JOAOGT/JGT_GPT_OSS_20B", "AbhishekAtPT/openai-gpt-oss-20b", "vishaljoshi24/trl-4-dnd", "bagustyo/GPT-OSS-20B-Bagus", "roshiai/openai-gpt-oss-20b", "cntrlx/testOSS", "Kushi-63/hehe", "VIDraft/gpt-oss-RAG", "nandhu-nr/openai-gpt-oss-20b-deploy", "ginigen/gpt-oss-RAG", "ReallyFloppyPenguin/openai-gpt-oss-20b", "toapt989/chatbot-nguyen-cong-toa-1", "Daksh-verse/ChatBot", "Semnykcz/openai-gpt-oss-20b", "Gabtheone/openai-gpt-oss-20b", "M-Willie/openai-gpt-oss-20b", "Duongg16/openai-gpt-oss-20b", "inxz094380/openai-gpt-oss-20b", "Chi12/openai-gpt-oss-20b", "MakeAnIque/gpt-oss-base", "PercivalFletcher/Shreyansh-HackRx", "Ygc111/gpt-oss-api", "yomag2/openai-gpt-oss-20b", "mydigitalsoluces/openai-gpt-oss-20b", "doropiza/gpt-oss-20b", "laisiuin/openai-gpt-oss-20b", "Ugottaloveit/openai-gpt-oss-20b", "rocky1410/oss", "TwistedMixxMusic/openai-gpt-oss-20b", "ervijayraghuwanshi/openai-gpt-oss-20b", "songhaifeng6/openai-gpt-oss-20b", "karmaittech/openai-gpt-oss-20b_without_signin", "DavidRuizRodr/AskDB", "Bensterne/openai-gpt-oss-20b", "sitikeykarmes/hackrx-document-query", "MohamedFahim/openai-gpt-oss-20b", "nbdat92/openai-gpt-oss-20b", "karthik711/Resilient_Rag", "freddyaboulton/gpt-oss-tokenizer-playground", "tertec/openai-gpt-oss-20b", "VatsalPatel18/certificate_generator_agent", "Dnitro/DocuSense_AI", "xarical/gpt-oss-20b-demo", "Monster/gpt-oss-20b", "karisnxa/openai-gpt-oss-20b", "mmabrouk88/openai-gpt-oss-20b", "Masuhaibani/DIM-AI", "mmaleki92/openai-gpt-oss-20b", "theguyfrooks/openai-gpt-oss-20b", "abdull4h/Phishing-Detective-Academy", "AbstractPhil/GPT-OSS-20B-Mirel", "Tonic/med-gpt-oss-20b-demo", "mileslilly/openai-gpt-oss-20b", "ethanwinters1907/openai-gpt-oss-20b", "Tonic/SmolFactory", "huynm/openai-gpt-oss-20b", "Maliere/openai-gpt-oss-20b", "abdull4h/soc-llm-assistant", "aleksol/openai-gpt-oss-20b", "DjornIronshield/DnD_Chatbot_v1", 
"wwjph2018/openai-gpt-oss-20b", "freddyaboulton/openai-gpt-oss-20b", "Uezurii/collabhaven-ai-20b-train", "utopia777/x-thread-analyzer", "Rsnarsna/openai-gpt-oss-120b", "AKV24/GPT", "paimonx/Groq_AI_gradio", "Navjeet07/openai-gpt-oss-20b", "namberino/mcq-gen-docker", "fdaudens/gpt-oss-agent-cookbook", "mmargg/AI_Chatbot", "group011/Capstone_Project3", "Eclipsewastaken/HealthSevaTextBackend", "akhaliq/gradio-chatbot-gpt-oss-20b", "Mahendra-AI/deploy_Chatgpt", "romangoapp/gpt-n8n", "kalhdrawi/gpt-oss-20b", "robertovicario/Chatbot", "Subnaut5482/openai-gpt-oss-20b", "Kushal1311/losser-linkedin-automation", "leeroy-jankins/Poppy", "Satyapusuluri/openai-gpt-oss-20b", "blazingbunny/rahulnyk_knowledge_graph", "nepersonaj/openai-gpt-oss-20b", "aimoneyclub/openai-gpt-oss-20b", "hoangkha1810/gpt-oss-RAG-CyberSoft", "ChiragPanchal020/AnalyzerGPT", "rohit-97535470279/openai-gpt-oss-20b", "trongld/Final_Assignment_Template", "factorst/NMFL", "logan201194/OOSO3", "royaldex696/openai-gpt-oss-20b", "silcer/openai-gpt-oss-20b", "Taplah/openai-gpt-oss-20b", "rdiyali/rd-trade", "TraderXpatfx/openai-gpt-oss-20b", "sik247/lexpt", "mxmdfz05/openai-gpt-oss-20b", "tharunk1/openai-gpt-oss-20b", "thinkingpal/prompting-hero", "aitsvet/meetpad", "Dnitro/DocuScanner", "tchung1970/openai-gpt-oss-20b", "tchung1970/openai-gpt-oss-20b-ko", "wegetix250/openai-gpt-oss-20b", "mainwhihoon/career_conv", "root007x/AI-agent", "SerotoninRonin/Quiz-Generator", "peter-cooper/openai-gpt-oss-20b", "niranjanpc/TextGeneration", "IKECHUKWUOTIS/openai-gpt-oss-20b", "ayasindemir/openai-gpt-oss-20b", "RickyTTT/NewsSpace", "23f3004315/data-analyst-agent", "McLoviniTtt/AgentTideDemo", "prism-initiative/deater-medical-rag", "renaissance2005/airm-guidelines", "mistermunib/abc", "Hosdroid/CAO", "santoshshrestha/career_conversation_chatbot", "wuhuizgptamd/ai", "AbrahamKlb/youtube-rag-project", "gobeldan/openai-gpt-oss-20b", "Marina2612827/MarinaRubi", "freddyaboulton/new-chatbot", "taibitfd/coco", "Sedwekk/openai-gpt-oss-20b", "Preethamsampath/Career_Conversations", "Preethamsampath/Career_Conversations2", "Adityaraje2001/Contract-RAG-Assistant", "DocSA/pizza", "melihTAl/llm_project", "compmeist/EXAONE_4_32B_test1", "hashvibe007/gemma3-270m-med", "yahyaiuu7/TechVillage-educationalplatform-ai", "Bill68/Bill-x", "sid0608/myFirstAgent", "ilyaam5t/ryhujrsxfuy", "dreamnato/multimodal", "My-Programer-future/deepseek-ai-DeepSeek-Coder-V2-Lite-Instruct", "agosh/ellax", "Fousseyni/openai-gpt-oss-120b", "Payel2619/mental_health_chatbot", "Dantegabs/mistralai-Mistral-7B-Instruct-v0.2", "Isiezra/Ezranet", "Nasiruabdullahi/SmartTechGuide", "Nhlero07/NM", "Payel2619/Nexora", "BASHGPT/BASHv2", "Nhlero07/Nhlero", "Nasiruabdullahi/NasiruAI", "Nasiruabdullahi/LearntechwithNasiruAI", "Vichuge/test_1", "FarisGabbyB/Gabnice-AI", "hari7261/SonicChatBot", "MMOON/IFSACTIONPLAN", "Kokayiii/consuelo-main", "joyjitdas/legal", "Skylorjustine/Text_Summarizer", "siyah1/preconsultai", "wardydev/toolify-tested-v2", "BlastVMG/relationship-detector", "SombreroCat/Sombi", "BlastVMG/tthn", "ganesh-dhumal/openai-gpt-oss-20b", "ashik730/Sane-Bot", "JerryYou/mcp-demo", "SombreroCat/Chatty", "AIDAS-Lab/Test", "CBT01/CBT03", "binhan2996/BinhanAI", "trunghieuma22/finetune", "Eslammahmod1981/Prof-Hadeed", "Amourman/KRUS-chatbot", "thanhle53/Push_Calling", "DoTrongDat/ttmqh", "jgnjWIOQE833/MyAgent", "Aizaz0205/Qwen-Qwen-Image-Edit", "Mus1221/dsf", "RagulArulanandam/Cassava-Assistant", "Dania2687/Qwen-Qwen3-Coder-480B-A35B-Instruct", "AbhayVG/VayuChat2", "SustainabilityLabIITGN/VayuChatv2", 
"juliusNice/ds3.1", "Mu7vmeed/MY_AI_GAME", "honzanetolicka/openai-gpt-oss-20b", "0xtimi/medrag-gradio", "autogenlabs/ai", "asmr12/Qwen-Qwen-Image-Edit", "jxrlin/nutri-llama3", "shumettsion/Naive-RAG-Chatbot", "IsraelSunday/openai-gpt-oss-20b", "aroojfatima998420/MYchatbot", "hielcra/deepseekv3.1", "aimodels3233/ai_app", "Freelancer-Baba/CHAT_BOT", "SanJes/interprepai", "KitiShow0/HuggingFaceTB-SmolLM3-3B", "richard16/job-recommendation-chatbot", "ubuntufan/meta_llama", "ubuntufan/ufundi", "StewartLabs/HC", "Wael1911/Ssd", "hdamghanian/new-test", "Myoosh33/palestine_chat_bot", "jfmcs20/Test", "matheens/TestingSpace1", "My-Programer-future/Yosef", "subhashnagaraju/demo-app", "LORDA1998/openai-gpt-oss-120b", "Amaralbakar/lava_ai", "johncuke/awang-thinking", "sorxors/bearbot", "Junusibi/Asistente_ESG", "AlishbahBashir/my-space", "szk2024/pypi", "syempuna/dev", "Sngricky/openai-gpt-oss-120b", "Tariqbillal/MarineGPT", "KryptonicJaze/Kryptic_Bible_Bot_2", "AnonyAuthor/Dangling_Pypi_Demo", "MOIN666/STUDY-AI-Helper", "Melissa13/mein-chatbot-demo", "Nioooor/cspc-conversational-agent", "sahman/bakanas", "Rud73/deepseek-ai-DeepSeek-V3.1-Base", "WNT3D/Qwen3-8B-abliterated-v1", "fkndvn/study_a_level", "stuartrcole/Docs", "rabin588/my-argilla", "iko-01/MAROCAI", "TexasTLM2281/AstroCoach", "ishaanchadha/aaditya-Llama3-OpenBioLLM-70B", "MsXploiter/MsTeam", "Scibuddyclasss9AI/Scibuddy9", "BuildingAjay/SFDFV", "rafooo/monfy", "briandean8/career_conversation", "laloadrianmorales/deepseek-ai-DeepSeek-V3.1", "RejoyRejoy/lab02", "Kishore1983/tokyo", "Sofan24/Japanese", "Siddu01/Movie_recommendation_system", "anas8660/firstprj", "kabeeromr142/TEZZA", "rrr777b/new-space", "Makaii/Makaiix", "aj9028/test", "Hardingstone/harding-stones-ai", "Headner/heady", "keithrodney/keith_test", "aravsaxena884/trueRAG", "scienc/fin", "SigmaEmpresaoficial/SigmaA1", "mssaidat/tryinghard", "Qasimhassan65/Giki-Chatbot", "rahayadav/gail-gas-chatbot-Final", "RoberSegond/telegram-bot-ia", "LucidMinds3ye/EQL", "Zhang-Bo-Xiang/my-ai-app", "Thalysdossantoscruz/RZ_PLAY", "Ahaan1/Kk", "Shan12861515/Shan", "Sachinkm180/Gemma-test", "keshavnp/aiml_healthcare", "Abhi4522788/Project_Sapiens", "LeroyDyer/LCARS", "oldman1216/chatbot", "panagiotagrosd/bot", "Shaleen123/ThoughtSwitch-V1", "chazuoban6666/chazuobbbb", "sununy/Qwen-Qwen-Image-Edit", "Saad381/PixaV1", "X-96/Qwen-Qwen-Image-Edit", "Valmeria/test-space", "Harshit2804/GenAI-Chatbot", "Offlineee/Pix2Motion", "wwalker28/deepseek-ai-DeepSeek-R1", "xuliang22233/huihui-ai-DeepSeek-R1-Distill-Qwen-32B-abliterated", "chsgksdn/text-classification", "srusanth/fake-news-detector-ai", "mathiaseggert/myWIPO", "buddiezweb3/openai-gpt-oss-20b", "monishnigam/moniniga", "mistrmintr/openai-gpt-oss-20b", "Scibuddyclasss9AI/Nexora", "GDKESHAV/gpt2", "Sofa293/ResilienceLLM", "kallilikhitha123/name-matching-test", "hinosh/claude300000000000000000000000000000", "ThaoVyUwU/555", "DGHOST355/prompthack", "Yassin33/suggest_menu", "Kiko304/MetaAI", "coldbak/Tabela", "Anaconda024/UCC_Ai_V2", "devanshsumam/AnshAI30", "Kiko304/IAMeta", "aiengineerajay/chatbot", "kd3756962/Chatbot_with_Ollama", "usmana/rice-disease-detection", "y2ksa/CR7", "Fxxhem/mufti", "y2ksa/Huh", "jatobraz/shopee", "ajprtest/meta-llama-Llama-3.2-11B-Vision-Instruct", "MukeshHV/DemoAIPro", "valedelledonne/spaz", "Subthedev/IgniteX", "Boluwatifeojo81110/Boluwatifeojo81110", "Fred808/INV", "Futuresony/WhatsApp_bot", "taha454/AidMateLLM", "AlejandroSalgueroT/Prueba", "BlmHarun/BmAI", "deepika-11/founder-assiatant", "mssaidat/imapro", "WNT3D/zkCrim", 
"johndoser97/new1", "highlimitdesigns/black-forest-labs-FLUX.1-Krea-dev", "Gueve/AI_GUEVE", "MicaMicaella/Roseria", "oofman/gradiochatbot", "pon15018/AMI", "Motazshx9/Motaz", "Poorajith/MintITS", "alistermarc/resume_chatbot", "Shreyasbalakrishna/Qwen-Qwen-Image-Edit", "ninja0011/Qwen-Qwen3-Coder-30B-A3B-Instruct", "Vikramma2727/openai-gpt-oss-20b_Vik", "emrsvnc01/my-llm-chatbot", "Ramrojith21/ai-dm-chatbot", "Kabirahmed81500/Jarvis-AI", "Wyatthatoffgriff654/openai-gpt-oss-20b", "Sandronelo/TaskDevelopment", "h19overflow/Self_learning", "h19overflow/selflearning", "Bbrfffgg/Steve-mini-chatbot", "shehrozmahr/sleep-stress-assistant", "sakshi2v2/GramVikasAI", "rebecax/dreptedu-ai", "allinoneeee/NousResearch-Hermes-3-Llama-3.1-8B", "Mohamedarcham/my-chatbot", "CodeHubb/openai-gpt-oss-20b", "VRCKT/space", "keerthyb/image-analysis-chatbot", "Edwin168/Spaces", "shehrozrashid52/Astro_Expert", "matrowy/avatar_tts", "GaYan23/Deep", "ZIONLOW/MY-AI-BOT", "pendrag/unia", "BONDRT/chatbotg", "BONDRT/chatbotog", "Rizki-firman/openai-gpt-oss-120b", "rj-ai-ind/llm_demo", "PulkitSahu/gpt-oss-reviewer", "parma79/nlp", "AiGarden/ai-tools-bot", "c3lpo/loo", "Melveen/Kibos", "OffThisDay/gpt-oss-20b-demo8", "Santhoshkumar199/openai-gpt-oss-20b1", "svsbandi/POML", "AiGarden/ai-garden", "Mutasim100/encryption-expert-chat", "Nnhhs3/Llama-ai", "Ronny12345-art/MR-GPT", "eslis/YTKMedia", "ewebspace/virtualsentence", "lbaleca/gg", "lbaleca/openai-gpt-oss-20b", "Rashmith245/SR-chatbot", "Vallio5o9/foundation-volunteer-chat", "knija17/EEE-AI", "tommyjis/mY-AI", "Zx444/KantaiNguta", "atharvbangle/Hackathon", "Ghost2513/openai-gpt-oss-20b", "Srinidhoni/Repo", "SHAURYAAA007/shaxx", "SHAURYAAA007/shaxxxxzz", "praneethR02/Detox", "BalaRahul/BalaRahul", "BalaRahul/rahul", "FahadKHanb56/SearchEngineLLM", "BLACK-TOES/AI-CHAT-BOT", "Aidenox/PygmalionAI-Pygmalion-3-12B", "jblast94/voice-ageny-liuve", "mAI-models/m-4.5_Pro", "e7245746/my-shakespeare-writer", "mAI-models/m-4.9_Plus", "VikaasN/telugu-chatbot", "Unclehoody58/Hiwbw", "anhducmata/baybee", "UmauvonStrietz/RadioUKWplus", "Thisisthisshsuis/YtGpt", "luuminhnhat/NousResearch-Hermes-3-Llama-3.1-405B", "anoop74rawat/Family_Response", "MMedia1/Qwen-Qwen-Image-Edit", "mAI-models/m-4.3-mini", "mAI-models/m-4.7o", "Omar123321/aitest", "TamaraLillian/chat-bot", "Indrajit009/Python_boy", "ruman1114/work", "Salman-Ahmad1122/adv-chatbot", "kos9/kos", "swangi/rag_vs_ft", "ARMudassir/hospital", "Dev-Vaish/WanderMind-AI", "mAI-models/m-DeepThinker-4", "mAI-models/m-DeepThinker-4.3-mini", "Imunlucky/Ohhyeah", "theallegro/chaka", "mohammedben/hamid", "adarshbaddies/aboutme-ai", "Blisk0/Agentic-RAG", "bischoff555/openai-gpt-oss-120b", "tdpp/Chat", "FeatureFinder/RAG-Chatbot", "pesquisasemia/Test", "carlosrodt/Blackspine9", "rjfresh988/3.1", "sultan-123/q4a-instruments", "ycherni/YOTALK", "parotelli/g", "illenluna/MeAgent", "rjfresh988/v", "illenluna/IllenAgent", "slinkybuky/BeanGPT", "oromero270/proftoak", "ch4v4/freelance", "Yumita-11/chatbot", "JonathanAKJ/JAKJ", "Jobfindr/AI", "bhumiboinwad/Career_guide_2.0", "Ramrojith21/Digital-Marketing-AI-Chatbot", "themayurjha/transcribe", "KJThe1/theonlyone", "Dablu123/Pichat", "Dablu123/Pi_chat", "asshat1981ar/Qwen-Qwen3-235B-A22B-Thinking-2507", "Rajkumarxx/Tiger", "RahulPraneshB/tiger", "Vignesh1399/AI_ChatBot", "Melvin2025/RedDragon", "Nijasparveen/H", "Neelkanani/ta-demo-bot", "venkat2000/AIChatBox", "troubledmonkey/Edvoice-agent", "taha5440/Chatbot1", "shazsabir/chatbot", "shazsabir/openai-gpt-oss-120b", "TSM7/chatfrench", 
"LearneratVnit/Lab_Assistant", "mdhossainbhuyain/student_wellness", "cubicalbrush453/Blake_ai", "AbuEl3mayer/LinesAlldashdoard", "AKKU07/manu", "EdgarDataScientist/Client_Management_Agent", "hsisopqqq/Serbisyo_PH", "nicobetancourt/nico_space_test", "avinashsidhu/AItutorapp", "DESTINY21/destiny", "Aymendn80/YouCanAI", "satyasri77/cfa-level1-bot", "nikittytu/Ai_consultant7", "lalkalol1907/oss-20b", "Gotsface/antiq", "kulsaurabh/delf-a1-chatbot", "nvsngurram/cai-group123-assignment", "umair112211/sleepdeprived3-Christian-Bible-Expert-v2.0-12B", "Jenzie/MindCare-AI", "IPEC-COMMUNITY/EO-Robotics", "Ariya814/Ariyalabs", "Byakk1/Byakkis_Zone", "pyjilic/aigo", "as8820141as/cjj", "Solez-Ai/KovexRoast", "bhumiboinwad/gradai", "zhnzeze/tream", "foreverwanghe/fire", "Sahil5112/Gohhg", "anderson1017/anderson", "PikaDai0903/PikaDai", "hk77cn/test", "Maoyuna/openai-gpt-oss-120b", "Konoharukida/Freespace", "JasonDever/aXAI", "teddybear95/teddybear", "sunqi1359145/chatAI", "cubelover/cube", "johantw/gpt-oss-20b", "mtman1212/athena", "iammawaistariq/lightriver_RAG", "yunzhu666/zy_gpt", "thorzh/chatbot", "lwmi/aibang", "Ansjsn/Gemma", "Wuyuehua/wyh515100", "asoul007/asoul008", "Chenzheluo/S2GNN", "xiaolc/xiaochuan", "wtgkm/wtgkm.ai", "wayofloser/waytohome", "xzqi/myhome", "kavyasama/my_chatbot", "renareddy/mychatbot", "wsnbb56/Noct", "ryan0223/Space", "Ogata13/Test", "lyz168/ylk", "VINE12/aichatbot_mental_health", "xiajunyi/AI", "DuanPingDong/Kevin", "hw0715888/tc0715", "saramuse/OribeDesk", "mahesh2025AI/Copilot_chatbot", "VINE12/my_health_chat_bot", "liujiawen92/liujiawen", "DuanPingDong/openai-gpt-oss-20b", "ektaprakash/Gold-assignments", "Solez-Ai/Kovex-Roast", "ulisse1996/lodge-easy", "rcpaffenroth/DSCS_553_example_2025", "Shahzaib124/fake_friend_detector", "wwfandy/wwfandy", "durai432002/demo", "ian20040409/Space1", "dadibide/future", "dertuff/NeiroFlaut", "fl534/mistralai-Mistral-7B-Instruct-v0.2", "Obummexon01/Project_star", "ProfNicholas/JailBreaker", "leitong99/wt", "OffThisDay/gpt-oss-20b-demo9", "CallmeStrange/dialogflow-gpt-chatbot", "EmbeddingsOG/farm-chat", "SlashPack1/RAG_PDF_Assistant", "uanandu/anandu-smolagent", "AlineAps/Tutorial2508", "Nebus/Yuni", "Ridler001/Deb-AI-table", "is21/openai-gpt-oss-120b", "Cristianancona/NeoSmith", "h-song/free", "Cristianancona/mi-neosmith", "asherzad/openai-gpt-oss-120b-test", "kkvipvip/Qwen-Qwen3-4B-Instruct-2507", "jojoli/chatcat", "Gtawad/Nafsmirror", "scai2025/scai02", "benshen/mylink", "LeSanaeIncorporated/LeDemo", "fengqing11111/fengqing", "deocheng/000", "jiang1122/xiongjie", "monica516666/Baer", "liexpress/newchat", "holyguy/CloudSpaces", "wky869/UCTG", "jim11237/zeroday", "hudsaed/hudsaeed", "ProjectsAiml/Vkreact", "harryboy99/elvisstore", "god230255/aa0168", "Nghiakttv/SDK", "realleonw/leonw-space", "asgnge/asgnge", "wbgwwd/baogen", "seanmini2024/AI", "SreekanthNarendran/RegtechIndia", "fgdfggg123/123", "Luoyazhou/DEEPSEEK-AI", "yaduns/chat", "shayaan1234567/bootcamp", "kenchoy/team4x", "kapilguptapt/_carrierconversion", "SouthNax2/openai-gpt-oss-120b", "XiverMa/comptation", "Vasisthkv/chatbot", "woori12/WOORICPA3_RND", "Stodeveloper/Stospace", "joinmeng/dream", "124canay124/deneme123", "JJoshi468/JJ_Workspace", "myHerb/WebInSight", "nadakjc/nadakjc", "sdxatf/voice", "SkyStrikerAce/Airspace", "OLUDAVID/DAVID0", "jiazhizhong/recallg-aibot", "kos9/ha", "TestFZAI/Modals", "Elias-Torjani/25W35", "dertuff/FlautGPT", "OLUDAVID/DAASO", "NebulaPaw/NebulaPaw", "OLUDAVID/davido", "RaymondBeniste666/GaiaDuduDevWebAIFinance", "manitra1/rag-christian", 
"Mirantoss/RAGRAG", "William-the-rizzler123/LearnFast-Math-Bot", "jianyuan941/private", "Luv88/openai-gpt-oss-120b", "Luv88/gpt-oss-120b-deploy", "soupstick/advanced-fraud-analyst", "Muqadas123/LLM", "nicolasmery/steelFOUNDRY", "Kamalra007/Aadhaya", "joao123a/totola", "lili0138/free", "Gabbydamian/clare", "nicolasmery/metallurgist", "Eldarich/openai-gpt-oss-20b", "fengqinngxue/AInav", "Him-Art/tencent-HunyuanVideo", "xiexain/lab", "Inv3ntan8or/DnD_Dungeon_Master_5e", "martinezaustin078/AI-Chatbot", "guguhenriquezl4/Mcsocial", "Anujmis/Medical-chatBot", "bylang/llm-from-scratch", "chipmonktalent/arcaneselfbooking", "mgbam/yeye", "vijoyPaul/mychatbot", "OrangeApe/test-demo", "unpredictable12/App.py", "aabhishek777/personal_chatbot", "cp7665315/Remini-ai", "Jonas-Stapper/Jonas_Virtual_CV", "Jongha611/quoter_v1", "bhuvi22/ai_therapist", "havikz/ultron", "mmarczuk/robobot", "Emir1234/Reyhan", "chrisizeful/goopy-catalog-chatbot", "SpaceNinja007/Tester", "DrLLM-Unity8/Arexja78", "lordkp/Ashu-bhai-jhatu", "RazorBll/diagnostico-salud", "Pinguy1982/test", "VegaLing/VChatbot", "hhdodkd223/kff", "Marinyon/Trend-Breakout", "barton333/RayneJin", "nassarit007/Nass", "Xxpert/new-openai-gpt-oss-20b", "leonardosaverio/transcription", "leonxiao-extr/play1", "GloriaGarcia/ai", "Blackrainbow7/BlackRainbow", "learntingss/study_people", "Hosh001/Justforfun", "eagle0504/chatbot-template", "Tanyain/myspace1", "sdkrastev/CSDS553_Demo", "sdkrastev/Playground", "Erenyvl/Nextgen", "bezalellee/life-giving", "Meraalalla/nari-labs-Dia-1.6B", "ocean-zhc/demo", "Arrisntthis/anthracite-org-magnum-v2.5-12b-kto", "geumgun/gpt432525", "doggdad/mmrag-hf", "haha1230o0/test001", "tseng91301/AI-Agent", "Luv88/new-one", "Moaazsoliman/AI_Powered_Products_Search_", "ranjanphukan/chatbot-gpt-oss-20b", "dnha/2", "Rupesh1215/Multi_Model_Chatbot", "jiejie22233/chatbot", "sadsaas/asd", "rampogen/mental_health_bot", "lesliewxj995/lesliewxj", "ncalr/htyuzz", "A0ne-01/scp079v0", "zjln/yrx", "jole0102/0102030405", "fm146147/chat", "jackyang1021125/2", "A0ne-01/scp079v0.1", "A0ne-01/scp079V0.2", "krishnathota99/basic1", "Karunyaa/Mail-gen", "julioesteban1/centroajedrecisticosuperior", "sdqfg/chatbot", "pasupathyvn/test", "Bharath1707/BharathBOT", "chris2396/Jiandong", "jornee/gemma", "Tom1986/test-ai", "BAKAI78/Kurike", "Rahmatjonov/open_master_AI", "alanatgt/free16", "nayanhugging/skillswap", "SamREye/novoco-agent", "llk2why/111", "EDDY88/Skhululwe", "akumusua/AAAD", "li1ned/Test-Space", "getinkatoch/image-renamer-clip", "jraeford/BridgingTheGap", "li1ned/DS-Space", "rtkmd/qhjldz", "ttjj666/E", "BorjiginHasa/MGL02", "Vsai2004/Intelligent_NLP_Powered_Chatbot_System", "hdj555/x5", "appletree23/meta-llama-Llama-3.2-3B-Instruct", "ahahgggg/TeckAI", "trungs/gemma-chat", "yusufs2/SolaraAI", "zoeminghong/first-ai", "huiyuanlin/ai", "GauSai/AIChatBot", "M-Rajeswari/en-te-story-bot", "lparkourer10/Minemalia_AI", "vision-labs/Yolo_web_app", "Blackechortd/black-echo-support", "akshaykumarsaw/MyGenAIChatBot", "sanxiang/701", "suprim96/Parinamm", "Blackechortd/black-echo-chatbot-support", "HardikBhardwaj/AgricultureProj", "santina2809/metadata-agent-chatbot", "rcpaffenroth/inclassteste", "kshahnathwani/inclasstest", "ratneshpawar/AI-based-Image-query-system", "surfdaddy/GPT_Preview", "GVHiranMagri/IT7133ITHelpDeskChatBot", "Popoolaibrahimtayo/CryptoZilla", "pedrolgr47/oneshot", "Meim3/Yu", "kvanta-labs/meta-llama-Llama-3.1-8B-Instruct", "Vitaly-Vyurkov/test", "cupcakes323/mp3-to-photo", "katie2023may/katiemaytest", "xl393613785/chat", "Srikesh/root", 
"jordanxue/chatbot1", "liaolijiang/Minicpm_T", "haiyangdiao/test", "JohnnyOpenxAI/deepseek-ai-DeepSeek-R1", "CloudifyDB/CloudifyDB", "wangze/nsi", "sdkrastev/spacetest", "somgiri290314/ChatKPI", "gugapiyal/my-ollama", "cryptoxxz/sof.ia", "RoseMilktea/ragtest", "Khwalu/thanzi_bott", "haiyangdiao/test1", "SWENDEV/bigrick1", "smile-x/chatbot", "sheba6115/MetaBot", "ASSLLP/RM-Assist-Agent", "xingzhe888/chat-AI", "Lowgen/deepseek-ai-DeepSeek-V3.1", "jackalsys/Test", "maragani/streamlittemplate", "vishanth2007/ACC", "maragani/three", "pollafattah/test1", "Dewanshtripathi45/devon-ai", "KAVY-AI/Hello_ai", "Edilizia/Benito", "qqwuyucheng/c11", "Pandi732/Local", "23f3004315/pro", "alexcore1/aaa", "Rooms/14_HF_Agent_project", "alexcore1/pdf", "alexcore1/vffvv", "alexcore1/vfvfvfvddr", "alexcore1/8899", "alexcore1/ll00", "alexcore1/4553", "alexcore1/34343", "alexcore1/f34", "Mclovin9100/hub_gpt_ultra.py", "rchrdwllm/aill-be-sick", "Eason5413/ChatAI", "Branda4689/litert-community-Gemma3-1B-IT", "Tejashree1309/farming-ass", "Widmery/afrinoti-llm", "umint/gpt-4.1-nano", "umint/o3", "tejaspix/TejasPix", "SpaceNinja007/TestBot", "yogies/chat-guide", "stackway-ai/openwebui", "Elikem-Ahlijah/autorag-chat", "iniyasargam23456/mcp-sentiment", "mkmanish/DocuMind", "mrms/My-Thesis-Advisor", "Garyy21/9server", "dlego08/izipay", "or1-gary/chat", "Anujmis/AI-MEDICAL-CHATBOT", "DaRKMaN257/Mkh257", "Exosynaptelemorphic/Mnemosyne", "4o4site/aaa", "CodeWithCesar98/LOVE", "ZK07AI/ZK07AI", "lucsanscartier/Superposition", "Gbzin123/Gbzin123", "Abass247/Imohnews", "Mirosoft/chatpko", "AkikoHanai/AI_behavior", "kvanta-labs/pubmedbert-base-embeddings", "rotateX/RotateX-Genie", "sprnt/lls", "dmitry1219/ProsusAI-finbert", "syedzakiya/Med-gemmaAI", "mathewjm/openai-gpt-oss-120b", "vineela231/RAG-QA-CHATBOT", "msmokov/minima", "Meesaw/AIma", "RrandomuUSser/test", "Illumotion/3", "SnehaLeela/career-chatbot", "ammumadhu/url_classifier", "Olskard/olskard-distilgpt-demo", "Radeonn123/RoBERTa_Sentiment_Analysis", "Aqeel34/Adventureman", "soumyasingha01/conversational-rag-pdf", "arvinddava/aravind_reval", "wasdqqawa/Qwen-Qwen3-Coder-30B-A3B-Instruct", "JackJ2322/fast-ai-lesson2", "Muddser/AI-Chatbot-Muddser", "balarajuyamini/TIRUPATI_GUIDE", "Ayazpanda65/Ayan", "Saivivek25/data", "melancholic-ksm/gemma3_270M", "sivalisct/orcaid-s3-1b", "bugraalptegin/test", "Emin4ik/chatllm", "yarenty/Chat_tester", "herry90/bitnet", "will7i7am7/elina-ai-chat", "unileon-robotics/Trasgu-Space", "dyyff/dyyff", "Tsaivbcknvbj/TSAI", "chabdulbasit989/Grafino_GPT", "xXrazor1234/Test_AI_App", "felik/nopkie", "dylanwhiggins27/Aboutus", "Engdawood/ALLaM-AI", "123vanshika/homework-helper-ai", "Bharani555/Image_classify", "msmokov/mit", "hhyykk/DK_bingdd", "qspacecorp/cfrsdzvf", "TechEnjoyer2006/Musasi_Model", "kansari2512/query_documents", "Wqndyl/Waelbenkandil", "ianchan963/Caeno", "BoomikaE/brain-tumor-detector", "johnnyrong/MySpace", "Pezjm/tgbot-ai", "khaju/jesustheway", "bonfire479/x", "dhaarmi/Summarizer", "Ogiebrooks/CHATBOT-AI", "wahidwahido/Kyle", "Fgracia22/dimssengpt", "Gourav31ite/chatbot-india", "yahyaiuu7/TechVillage-educationalplatform-exam-chatbot-ai", "Interste11ar/testing", "ravulavishalreddy99/chatbot", "1arshadshaikh/PraetorAI", "Therealtomfitz/Test", "freew44/kgjujhgf", "freew44/DORORO", "yhalltech/hello2", "rcpaffenroth/CSDS553_Demo", "annietayyab/MedicalChatbot", "Pagi66/linkedin-ai", "Ahsan2kk1/PdfAnswerAi", "ZeusRoby/Ralph", "Ogiebrooks/chatboot2", "SafaaAI/chat", "HussienXG/ai-agent", "asahu-synaptics/Data-Tool", 
"xXSalvadorAndradeXx/Modelo", "aleafknow/xiaok", "Lanexbx/Lina", "NEURODIVERGENTHELPER/deepseek-ai-DeepSeek-R1", "BONCDFGX/20250830", "11b11/DFCOC", "SaelViantra/SaelViantra", "umint/openwebui", "springming/chat", "lucsanscartier/Yas", "genetech/testing0001", "pygae/o6-predictor", "sheikahamed12/career_conversations", "bhavaniguni/AI-Med-Prescription", "lixiaoyaoHugging/services", "Chenaou/bot", "8uddys4nj4y/KrishiAi-demo1", "Qmellow/Qwen-Qwen-Image-Edit", "nutrition123/Zeonlife", "FallenBoy001/Who", "00han00/playground", "Vij8718/Chatbot1", "gm42/testing_tycho", "Shaban306/openai-gpt-oss-20b", "jacksonandrew/demo", "baitadem/adem", "GiangZ/MailServer", "Manish6Shetty2004/testing", "shadow168/shadow", "topsutin2121/meta-llama-Llama-3.1-8B-Instruct", "Ankit105/AnkitVirtualResume", "Debapriya16/gg", "Roboguy1/ROBO_GUY", "Corex-mode/corex-modal", "Kaushal-HerbChimney16/new-space", "jaiarora123/new-space-tester", "jaiarora123/new-space", "QiAdmin/text", "RAGE45/as", "ambynoob/Qwen-Qwen3-Coder-480B-A35B-Instruct", "Mmdv2/MmdiAi", "BalaCodes/query-classifier", "LENGRAN/ai", "rohan1529/100x-chat-UI", "kimfly/kimfly", "steinven/demo", "chandrakantnial/demo", "chandu33raja/llm", "jcrobots5123/openai-gpt-oss-120b1r1rr1r1r", "mrkhalilL/chatbotalvand", "jcrobots5123/openai-gpt-oss-120b345t345345", "jcrobots5123/deepseek-ai-DeepSeek-R1", "sudipta26889/gradio-doc", "YSB0026/rock", "mogalaman251100/gpt-oss-20-fine-tuned", "kokziv/aichat", "LangNo/test", "gaurav0506/kuro_Ai", "rehanifakram/onetry", "itzitachi/animo", "itzitachi/Animechater", "Vigneshmuthusamy/MidasProjectDetails_AI_Agent", "shashinani/arcanum2", "Techonni/Lou", "Ankit18006/ai", "RSgroup/chatbottest", "mastodonitis/mastodonitis", "lucasabner/career_conversation_lucas", "mastodonitis/hersemesitio", "ZeusRoby/openai-gpt-oss-20b", "Yash985/TestSpace", "Kamalkshs82/openai-gpt-oss-20b", "Kamalkshs82/Jioh", "Create1234/Homework_hub3000", "Sphiwe-5509AI/Spaceman", "nikzadb/ChatModelApp", "AgenticGogol/rag_space_name", "AgenticGogol/rage_deploy_new", "iamnilesh007/chatbot1.0", "GTOMA83/MeuModelo1", "hassammoin/GPT-Uncensored-PenTest", "darutto/aboutmechat", "Rafs-an09002/my-chatbot", "SouthernHaus/RelocationHelper", "Rafs-an09002/chat", "beinawsa/Mara_Translator", "toan0212/ChatReact", "RivalsOnTS/my-ai-chat", "Shibih-1202/Llama-trained-deploy", "dark145/myresume-gen", "chenyuppy/chatbot", "kesika/kesika", "flavio10/Robot", "orjiwinston1/comic", "uhybub/MYFIRSTGenAiAvatar", "Sumayukh/openai-gpt-oss-20b", "JAjajajajajajajajaj/Ja", "Nolan35/Qwen-Qwen3-Coder-30B-A3B-Instruct", "shashinani/Howgarts", "PranavReddy18/latest-poerfolio", "JD-billionaire/Legal_advisor", "ardgan/noname", "Vij8718/Trial", "jackchen999/chatbot", "ferrywuai/gradio-chatbot-test", "otrojesfahan/ai", "pulkit0101/datasetfinder-ai", "Create1234/openai-gpt-oss-20b", "Sublimity24/Fake-news-detector", "jyothika007/Jyothika-chatbox", "Jintaro5423/NsfwAi", "jagadesh31/chatbot", "rpmellow/streaming", "syakesaba/test", "sudeepchakraborty/chakraborty", "sudeepchakraborty/chak", "zizq/as", "Lucasmarsu/lusure", "pangxiang/lt" ]
[ "apache-2.0" ]
null
null
21,511,953,984
null
[ "text-generation" ]
null
[ "GptOssForCausalLM", "AutoModelForCausalLM", "gpt_oss" ]
[ "text" ]
[ "text" ]
[ "text" ]
enterprise
company
[ "United States of America" ]
null
null
null
null
null
null
null
null
null
68b159467c2485b297655f40
meituan-longcat/LongCat-Flash-Chat
meituan-longcat
null
9
9
False
2025-08-29T07:39:50
2025-08-31T09:12:12
LongCat-Flash-Chat
126
126
null
text-generation
{"parameters": {"BF16": 561730738176, "F32": 132142080}, "total": 561862880256}
[ ".gitattributes", "LICENSE", "README.md", "config.json", "configuration_longcat_flash.py", "generation_config.json", "model.safetensors.index.json", "model_00001-of-00075.safetensors", "model_00002-of-00075.safetensors", "model_00003-of-00075.safetensors", "model_00004-of-00075.safetensors", "model_00005-of-00075.safetensors", "model_00006-of-00075.safetensors", "model_00007-of-00075.safetensors", "model_00008-of-00075.safetensors", "model_00009-of-00075.safetensors", "model_00010-of-00075.safetensors", "model_00011-of-00075.safetensors", "model_00012-of-00075.safetensors", "model_00013-of-00075.safetensors", "model_00014-of-00075.safetensors", "model_00015-of-00075.safetensors", "model_00016-of-00075.safetensors", "model_00017-of-00075.safetensors", "model_00018-of-00075.safetensors", "model_00019-of-00075.safetensors", "model_00020-of-00075.safetensors", "model_00021-of-00075.safetensors", "model_00022-of-00075.safetensors", "model_00023-of-00075.safetensors", "model_00024-of-00075.safetensors", "model_00025-of-00075.safetensors", "model_00026-of-00075.safetensors", "model_00027-of-00075.safetensors", "model_00028-of-00075.safetensors", "model_00029-of-00075.safetensors", "model_00030-of-00075.safetensors", "model_00031-of-00075.safetensors", "model_00032-of-00075.safetensors", "model_00033-of-00075.safetensors", "model_00034-of-00075.safetensors", "model_00035-of-00075.safetensors", "model_00036-of-00075.safetensors", "model_00037-of-00075.safetensors", "model_00038-of-00075.safetensors", "model_00039-of-00075.safetensors", "model_00040-of-00075.safetensors", "model_00041-of-00075.safetensors", "model_00042-of-00075.safetensors", "model_00043-of-00075.safetensors", "model_00044-of-00075.safetensors", "model_00045-of-00075.safetensors", "model_00046-of-00075.safetensors", "model_00047-of-00075.safetensors", "model_00048-of-00075.safetensors", "model_00049-of-00075.safetensors", "model_00050-of-00075.safetensors", "model_00051-of-00075.safetensors", "model_00052-of-00075.safetensors", "model_00053-of-00075.safetensors", "model_00054-of-00075.safetensors", "model_00055-of-00075.safetensors", "model_00056-of-00075.safetensors", "model_00057-of-00075.safetensors", "model_00058-of-00075.safetensors", "model_00059-of-00075.safetensors", "model_00060-of-00075.safetensors", "model_00061-of-00075.safetensors", "model_00062-of-00075.safetensors", "model_00063-of-00075.safetensors", "model_00064-of-00075.safetensors", "model_00065-of-00075.safetensors", "model_00066-of-00075.safetensors", "model_00067-of-00075.safetensors", "model_00068-of-00075.safetensors", "model_00069-of-00075.safetensors", "model_00070-of-00075.safetensors", "model_00071-of-00075.safetensors", "model_00072-of-00075.safetensors", "model_00073-of-00075.safetensors", "model_00074-of-00075.safetensors", "model_00075-of-00075.safetensors", "modeling_longcat_flash.py", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json" ]
null
null
6d2d483a1112bce151bcba600d84329c40eb72dd
[ "LongCat-Flash-Chat", "safetensors", "text-generation", "transformers", "conversational", "custom_code", "license:mit", "region:us" ]
null
# LongCat-Flash-Chat

<div align="center">
<img src="https://raw.githubusercontent.com/meituan-longcat/LongCat-Flash-Chat/main/figures/longcat_logo.svg" width="300" alt="LongCat Logo"/>
</div>

<hr>

<div align="center" style="line-height: 1;">
<a href="https://longcat.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-LongCat--Flash--Chat-ADFF2F?color=29E154&logoColor=white" fill-opacity="1" style="display: inline-block; vertical-align: middle;"/> </a>
<a href="https://huggingface.co/meituan-longcat" target="_blank" style="margin: 2px;"> <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-LongCat-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a>
</div>

<div align="center" style="line-height: 1;">
<a href="https://github.com/meituan-longcat/LongCat-Flash-Chat/blob/main/figures/wechat_official_accounts.png" target="_blank" style="margin: 2px;"> <img alt="Wechat" src="https://img.shields.io/badge/WeChat-LongCat-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a>
<a href="https://x.com/Meituan_LongCat" target="_blank" style="margin: 2px;"> <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-LongCat-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/> </a>
</div>

<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/meituan-longcat/LongCat-Flash-Chat/blob/main/LICENSE" style="margin: 2px;"> <img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/> </a>
</div>

## Model Introduction

We introduce LongCat-Flash, a powerful and efficient language model with 560 billion total parameters, featuring an innovative Mixture-of-Experts (MoE) architecture. The model incorporates a dynamic computation mechanism that activates 18.6B∼31.3B parameters (averaging ∼27B) based on contextual demands, optimizing both computational efficiency and performance. To achieve advanced training and inference efficiency, we employ a shortcut-connected architecture that expands the computation-communication overlap window, cost-effectively achieving inference speeds of over 100 tokens per second (TPS). Our comprehensive training and scaling strategies ensure stable, efficient training, while tailored data strategies enhance model performance. We now release LongCat-Flash-Chat, a non-thinking foundation model that delivers highly competitive performance among leading models, with exceptional strengths in agentic tasks.

### Key Features

#### 🌟 Scalable Architectural Design for Computational Efficiency

LongCat-Flash is designed and optimized under two key principles: efficient computation utilization, as well as efficient training and inference. Specifically, (1) as not all tokens are equal, we introduce the zero-computation experts mechanism in MoE blocks to allocate a dynamic computation budget to important tokens based on their significance, i.e., activating 18.6 to 31.3 billion parameters (out of 560 billion total) based on contextual demands. To ensure a consistent computation load, we employ an expert bias adjusted by a PID controller, maintaining an average of ∼27 billion activated parameters per token. (2) As communication overhead becomes a bottleneck during MoE model scaling, we incorporate the Shortcut-connected MoE (ScMoE) design to expand the computation-communication overlap window.
Combined with customized infrastructure optimizations, this design enables training at a massive scale of tens of thousands of accelerators, and inference with high throughput and low latency.

#### 🌟 Effective Model Scaling Strategy

Effectively and efficiently scaling model size remains a key challenge in strategy design. To this end, we develop a comprehensive stability-and-scaling framework for robustly training large-scale models: (1) We successfully apply a hyperparameter transfer strategy to such a large model, predicting optimal hyperparameter configurations by leveraging results from smaller proxy models with theoretical guarantees. (2) We initialize the model using a model-growth mechanism based on a refined half-scale checkpoint, achieving improved performance compared to conventional initialization methods. (3) A multi-pronged stability suite incorporates principled router-gradient balancing, a hidden z-loss to suppress massive activations, and fine-tuned optimizer configurations. (4) To enhance the reliability of large-scale cluster training, we introduce deterministic computation. This guarantees the exact reproducibility of experiments and enables the detection of SDC (Silent Data Corruption) during the training process. These interventions ensure that LongCat-Flash's training remains stable, with no irrecoverable loss spikes.

#### 🌟 Multi-Stage Training Pipeline for Agentic Capability

Through a meticulously designed pipeline, LongCat-Flash is endowed with advanced agentic behaviors. Initial efforts focus on constructing a base model better suited for agentic post-training, where we design a two-stage pretraining data fusion strategy to concentrate reasoning-intensive domain data. During mid-training, we enhance reasoning and coding capabilities while extending the context length to 128k to meet agentic post-training requirements. Building on this advanced base model, we proceed with multi-stage post-training. Recognizing the scarcity of high-quality, high-difficulty training problems for agentic tasks, we design a multi-agent synthesis framework that defines task difficulty across three axes (information processing, tool-set complexity, and user interaction) and uses specialized controllers to generate complex tasks requiring iterative reasoning and environmental interaction.

For more details, please refer to the comprehensive [***LongCat-Flash Technical Report***](https://github.com/meituan-longcat/LongCat-Flash-Chat/blob/main/tech_report.pdf).
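The zero-computation-expert idea described under Key Features (identity experts that cost no FLOPs, plus a PID-controlled router bias that holds the average activation near ∼27B parameters) can be made concrete with a toy sketch. The following is an illustration only, not the LongCat-Flash implementation: the expert counts, top-1 routing, target fraction, and PID gains are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_REAL_EXPERTS = 8        # experts that actually run an FFN
NUM_ZERO_EXPERTS = 8        # identity "zero-computation" experts (no FLOPs)
TARGET_REAL_FRACTION = 0.5  # toy stand-in for "~27B of 560B activated on average"

# PID gains: illustrative values, not taken from the technical report.
KP, KI, KD = 0.5, 0.05, 0.1

bias = 0.0        # shared bias added to the zero-computation experts' logits
integral = 0.0
prev_error = 0.0

for step in range(201):
    # Fake router logits for a batch of 1024 tokens over all experts.
    logits = rng.normal(size=(1024, NUM_REAL_EXPERTS + NUM_ZERO_EXPERTS))
    logits[:, NUM_REAL_EXPERTS:] += bias  # steer traffic toward/away from identity experts

    # Top-1 routing for simplicity; tokens sent to zero-computation experts skip the FFN.
    choice = logits.argmax(axis=1)
    real_fraction = float((choice < NUM_REAL_EXPERTS).mean())

    # PID update drives the realized compute fraction toward the target load.
    error = real_fraction - TARGET_REAL_FRACTION
    integral += error
    derivative = error - prev_error
    bias += KP * error + KI * integral + KD * derivative
    prev_error = error

    if step % 50 == 0:
        print(f"step {step:3d}  real_fraction={real_fraction:.3f}  bias={bias:+.3f}")
```

The point of the sketch is the control loop: when too many tokens pick compute experts, the bias rises and pushes more tokens onto the free identity experts, so the average activated parameter count stays near the budget.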
## Evaluation Results

| **Benchmark** | **DeepSeek V3.1** | **Qwen3 MoE-2507** | **Kimi-K2** | **GPT-4.1** | **Claude4 Sonnet** | **Gemini2.5 Flash** | **LongCat-Flash** |
|---------------|-------------------|--------------------|-------------|-------------|--------------------|---------------------|-------------------|
| **Architecture** | MoE | MoE | MoE | - | - | - | MoE |
| **# Total Params** | 671B | 235B | 1043B | - | - | - | 560B |
| **# Activated Params** | 37B | 22B | 32B | - | - | - | 27B |
| **General Domains** | | | | | | | |
| MMLU<sub>(acc)</sub> | 90.96 | 90.23 | 89.86 | 89.64 | 91.75 | 86.33 | 89.71 |
| MMLU-Pro<sub>(acc)</sub> | 84.45 | 84.83 | 82.06 | 81.72 | 83.74 | 81.95 | 82.68 |
| ArenaHard-V2<sub>(acc)</sub> | 84.10 | 88.20 | 85.70 | 61.50 | 62.10 | 77.00 | 86.50 |
| CEval<sub>(acc)</sub> | 89.21 | 92.70 | 91.26 | 79.53 | 86.63 | 78.78 | 90.44 |
| CMMLU<sub>(acc)</sub> | 88.04 | 88.14 | 89.66 | 77.65 | 86.51 | 78.30 | 84.34 |
| **Instruction Following** | | | | | | | |
| IFEval<sub>(acc)</sub> | 86.69 | 88.54 | 88.91 | 85.58 | 88.35 | 83.92 | 89.65 |
| COLLIE<sub>(acc)</sub> | 43.80 | 49.71 | 56.34 | 50.00 | 51.22 | 48.60 | 57.10 |
| Meeseeks-zh<sub>(acc)</sub> | 33.83 | 35.32 | 42.79 | 41.54 | 35.07 | 34.84 | 43.03 |
| **Mathematical Reasoning** | | | | | | | |
| MATH500<sub>(acc)</sub> | 96.08 | 98.80 | 97.60 | 90.60 | 93.80 | 98.40 | 96.40 |
| AIME24<sub>(avg@10)</sub> | 66.30* | 81.67 | 69.60* | 47.00 | 47.00 | 79.67 | 70.42 |
| AIME25<sub>(avg@10)</sub> | 49.27 | 68.33 | 50.66 | 32.00 | 37.00 | 67.33 | 61.25 |
| BeyondAIME<sub>(avg@10)</sub> | 36.50 | 57.60 | 36.60 | 22.10 | 20.50 | 44.20 | 43.00 |
| **General Reasoning** | | | | | | | |
| GPQA-diamond<sub>(acc)</sub> | 74.90* | 77.43 | 75.76 | 67.68 | 70.71 | 80.30 | 73.23 |
| DROP<sub>(f1)</sub> | 84.19 | 78.57 | 89.04 | 66.94 | 73.06 | 45.03 | 79.06 |
| ZebraLogic<sub>(acc)</sub> | 85.30 | 94.22 | 89.11 | 56.30* | 75.85 | 51.78 | 89.30 |
| GraphWalks-128k<sub>(precision)</sub> | 73.54 | 80.72 | 47.50 | 85.02 | 80.57 | 64.83 | 51.05 |
| **Coding** | | | | | | | |
| LiveCodeBench<sub>(pass@1)</sub> | 56.40* | 46.48 | 46.70 | 39.21 | 45.59 | 39.65 | 48.02 |
| Humaneval+<sub>(pass@1)</sub> | 92.68 | 94.51 | 85.98 | 93.29 | 94.51 | 87.80 | 88.41 |
| MBPP+<sub>(pass@1)</sub> | 79.89 | 79.89 | 81.75 | 79.37 | 80.16 | 76.19 | 79.63 |
| SWE-Bench-Verified<sub>(acc)</sub> | 66.00* | 42.00 | 64.60 | 48.60 | 68.00* | 40.60 | 60.40 |
| TerminalBench<sub>(acc)</sub> | 31.30* | 17.28 | 25.93 | 28.40 | 40.74 | 12.35 | 39.51 |
| **Agentic Tool Use** | | | | | | | |
| τ²-Bench (telecom)<sub>(avg@4)</sub> | 38.50 | 22.50 | 67.50 | 35.20 | 46.20 | 16.50 | 73.68 |
| τ²-Bench (airline)<sub>(avg@4)</sub> | 46.00 | 36.00 | 54.20 | 56.00 | 60.00 | 41.50 | 58.00 |
| τ²-Bench (retail)<sub>(avg@4)</sub> | 64.90 | 70.50 | 70.80 | 74.10 | 80.00 | 64.80 | 71.27 |
| AceBench<sub>(acc)</sub> | 69.70 | 71.10 | 82.20 | 80.10* | 76.20* | 74.50* | 76.10 |
| VitaBench<sub>(avg@4)</sub> | 20.30 | 8.50 | 18.20 | 19.00 | 23.00 | 8.00 | 24.30 |
| **Safety** | | | | | | | |
| Harmful | 82.79 | 80.82 | 53.91 | 56.19 | 66.56 | - | 83.98 |
| Criminal | 87.83 | 89.13 | 77.19 | 81.58 | 87.58 | - | 91.24 |
| Misinformation | 83.17 | 77.76 | 42.68 | 45.49 | 54.91 | - | 81.72 |
| Privacy | 98.80 | 98.80 | 96.39 | 98.80 | 100.00 | - | 93.98 |

Note:
* Values marked with `*` are sourced from other public reports.
* DeepSeek-V3.1, Qwen3-235B-A22B, Gemini2.5-Flash, and Claude4-Sonnet are evaluated under their non-thinking mode.
## Quick Start

### Chat Template

The details of our chat template are provided in the `tokenizer_config.json` file. Below are some examples.

#### First-Turn

With the following prefix, LongCat-Flash can generate responses corresponding to user queries:

```
[Round 0] USER:{query} ASSISTANT:
```

When a system prompt is specified, the prefix will take the following format:

```
SYSTEM:{system_prompt} [Round 0] USER:{query} ASSISTANT:
```

#### Multi-Turn

In multi-turn scenarios, the prefix is constructed by concatenating the context with the latest user query:

```
SYSTEM:{system_prompt} [Round 0] USER:{query} ASSISTANT:{response}</longcat_s>... [Round N-1] USER:{query} ASSISTANT:{response}</longcat_s> [Round N] USER:{query} ASSISTANT:
```

Here, N denotes the N-th round of user queries, with indexing starting from zero.

#### ToolCall

LongCat-Flash supports tool calling in the following format:

```
{tool_description}

## Messages
SYSTEM:{system_prompt} [Round 0] USER:{query} ASSISTANT:
```

The tool_description is:

```markdown
## Tools
You have access to the following tools:

### Tool namespace: function

#### Tool name: {func.name}

Description: {func.description}

InputSchema:
{json.dumps(func.parameters, indent=2)}

**Note**: For each function call, return a json object with function name and arguments within <longcat_tool_call></longcat_tool_call> XML tags as follows:
<longcat_tool_call>
{"name": <function-name>, "arguments": <args-dict>}
</longcat_tool_call>
When multiple functions need to be called simultaneously, each function call should be wrapped in its own <longcat_tool_call> tag and placed consecutively. For example:
<longcat_tool_call>
{"name": <function-name>, "arguments": <args-dict>}
</longcat_tool_call><longcat_tool_call>
{"name": <function-name>, "arguments": <args-dict>}
</longcat_tool_call>
```

## Deployment

We have implemented basic adaptations in both SGLang and vLLM to support the deployment of LongCat-Flash. For comprehensive guidance, please refer to the [Deployment Guide](https://github.com/meituan-longcat/LongCat-Flash-Chat/blob/main/docs/deployment_guide.md) in the LongCat-Flash-Chat repository.

## Chat Website

You can chat with LongCat-Flash on our official website: [https://longcat.ai](https://longcat.ai).

## License Agreement

This repository, including both the model weights and the source code, is released under the **MIT License**. Any contributions to this repository are licensed under the MIT License, unless otherwise stated. This license does not grant any rights to use Meituan trademarks or patents. For details, see the [LICENSE](./LICENSE) file.

## Usage Considerations

This model has not been specifically designed or comprehensively evaluated for every possible downstream application. Developers should take into account the known limitations of large language models, including performance variations across different languages, and carefully assess accuracy, safety, and fairness before deploying the model in sensitive or high-risk scenarios. It is the responsibility of developers and downstream users to understand and comply with all applicable laws and regulations relevant to their use case, including but not limited to data protection, privacy, and content safety requirements.

Nothing in this Model Card should be interpreted as altering or restricting the terms of the MIT License under which the model is released.

## Contact

Please contact us at <a href="mailto:longcat-team@meituan.com">longcat-team@meituan.com</a> or open an issue if you have any questions.
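To make the chat template above easier to apply programmatically, here is a minimal prompt builder that follows the documented format. The chat template shipped in `tokenizer_config.json` (e.g., via `tokenizer.apply_chat_template`) remains authoritative; in particular, the single-space joining between segments below is an assumption, and `build_longcat_prompt` is a hypothetical helper, not part of the release.

```python
def build_longcat_prompt(messages, system_prompt=None):
    """Build a LongCat-Flash prompt from the documented [Round i] USER/ASSISTANT format.

    messages: list of {"role": "user" | "assistant", "content": str}, ending with a user turn.
    """
    parts = []
    if system_prompt:
        parts.append(f"SYSTEM:{system_prompt}")

    round_idx = 0
    pending_user = None
    for msg in messages:
        if msg["role"] == "user":
            pending_user = msg["content"]
        elif msg["role"] == "assistant":
            # Completed rounds end with the </longcat_s> terminator.
            parts.append(
                f"[Round {round_idx}] USER:{pending_user} ASSISTANT:{msg['content']}</longcat_s>"
            )
            round_idx += 1
            pending_user = None

    # The final, unanswered user query opens the next round and ends with "ASSISTANT:".
    parts.append(f"[Round {round_idx}] USER:{pending_user} ASSISTANT:")
    return " ".join(parts)


if __name__ == "__main__":
    history = [
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi, how can I help?"},
        {"role": "user", "content": "Summarize what makes ScMoE efficient."},
    ]
    print(build_longcat_prompt(history, system_prompt="You are a helpful assistant."))
```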
null
[ "mit" ]
null
null
561,862,880,256
null
[ "text-generation" ]
null
[ "LongcatFlashForCausalLM", "AutoModelForCausalLM", "modeling_longcat_flash.LongcatFlashForCausalLM" ]
[ "text" ]
[ "text" ]
[ "text" ]
user
user
[ "user" ]
null
null
null
null
null
null
null
null
null
68913522f16f3c8aaffccf1f
openai/gpt-oss-120b
openai
null
2,333,920
2,333,920
False
2025-08-04T22:33:06
2025-08-26T17:25:03
transformers
3,669
113
null
text-generation
{"parameters": {"BF16": 2167371072, "U8": 118244966400}, "total": 120412337472}
[ ".gitattributes", "LICENSE", "README.md", "USAGE_POLICY", "chat_template.jinja", "config.json", "generation_config.json", "metal/model.bin", "model-00000-of-00014.safetensors", "model-00001-of-00014.safetensors", "model-00002-of-00014.safetensors", "model-00003-of-00014.safetensors", "model-00004-of-00014.safetensors", "model-00005-of-00014.safetensors", "model-00006-of-00014.safetensors", "model-00007-of-00014.safetensors", "model-00008-of-00014.safetensors", "model-00009-of-00014.safetensors", "model-00010-of-00014.safetensors", "model-00011-of-00014.safetensors", "model-00012-of-00014.safetensors", "model-00013-of-00014.safetensors", "model-00014-of-00014.safetensors", "model.safetensors.index.json", "original/config.json", "original/dtypes.json", "original/model--00001-of-00007.safetensors", "original/model--00002-of-00007.safetensors", "original/model--00003-of-00007.safetensors", "original/model--00004-of-00007.safetensors", "original/model--00005-of-00007.safetensors", "original/model--00006-of-00007.safetensors", "original/model--00007-of-00007.safetensors", "original/model.safetensors.index.json", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json" ]
[ 1570, 11357, 7111, 201, 16738, 2089, 177, 65238253568, 4625017896, 4115586736, 4625017888, 4115586752, 4625017896, 4115586696, 4625017856, 4060267176, 4625017896, 4170906304, 4625017896, 4115586752, 4064660808, 4625017896, 4115586736, 54511, 377, 19658, 10544040680, 10488721680, 10488721688, 10488721672, 10488721680, 10433402600, 2316539800, 37796, 98, 27868174, 4200 ]
195,764,040,609
b5c939de8f754692c1647ca79fbf85e8c1e70f8a
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "arxiv:2508.10925", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "mxfp4", "region:us" ]
null
<p align="center">
  <img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg">
</p>

<p align="center">
  <a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
  <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
  <a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> ·
  <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>

<br>

Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.

We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)

Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format, as they will not work correctly otherwise.

> [!NOTE]
> This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model.

# Highlights

* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization.

---

# Inference examples

## Transformers

You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to set up your environment:

```
pip install -U transformers kernels torch
```

Once set up, you can run the model with the snippet below:

```py
from transformers import pipeline
import torch

model_id = "openai/gpt-oss-120b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:

```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b
```

[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)

## vLLM

vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.

```bash
uv pip install --pre vllm==0.10.1+gptoss \
    --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
    --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
    --index-strategy unsafe-best-match

vllm serve openai/gpt-oss-120b
```

[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)

## PyTorch / Triton

To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).

## Ollama

If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).

```bash
# gpt-oss-120b
ollama pull gpt-oss:120b
ollama run gpt-oss:120b
```

[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)

#### LM Studio

If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download it.

```bash
# gpt-oss-120b
lms get openai/gpt-oss-120b
```

Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.

---

# Download the model

You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly with the Hugging Face CLI:

```shell
# gpt-oss-120b
huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
pip install gpt-oss
python -m gpt_oss.chat model/
```

# Reasoning levels

You can adjust the reasoning level to suit your task across three levels:

* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.

The reasoning level can be set in the system prompt, e.g., "Reasoning: high".

# Tool use

The gpt-oss models are excellent for:

* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks

# Fine-tuning

Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware.

# Citation

```bibtex
@misc{openai2025gptoss120bgptoss20bmodel,
      title={gpt-oss-120b & gpt-oss-20b Model Card},
      author={OpenAI},
      year={2025},
      eprint={2508.10925},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.10925},
}
```
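As a small usage sketch tying together the Transformers example and the reasoning-level convention above: the card states that the level can be requested in the system prompt (e.g., "Reasoning: high"), so one way to do that with the same pipeline is shown below. The exact phrasing handled by the harmony chat template may differ, so treat this as an assumption-laden illustration rather than an official recipe.

```py
from transformers import pipeline

# Same pipeline setup as the Transformers example above (needs roughly an 80GB GPU).
pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",
    torch_dtype="auto",
    device_map="auto",
)

# Request deeper reasoning via the system prompt, per the "Reasoning levels" section.
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Walk through why the square root of 2 is irrational."},
]

outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])
```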
[ "amd/gpt-oss-120b-chatbot", "umint/ai", "MGZON/mgzon-app", "SustainabilityLabIITGN/VayuChat", "fdaudens/gpt-oss-news-agent", "Arphd4/ARK.AI", "Wenxi123/openai-gpt-oss-120b", "nazdridoy/inferoxy-hub", "Humbl3m33/openai-gpt-oss-120b", "umint/o4-mini", "openfree/OpenAI-gpt-oss", "ginipick/FLUXllama", "jatinmehra/PDF-Insight-PRO", "yashgori20/Inhance", "yashgori20/FinLLM-RAG", "SoumyaJ/PdfQnAUsingPinecone", "SoumyaJ/AutoCreateProgramme", "SoumyaJ/AutoCreateProgrammeUsingFile", "userlollolol1/smallai", "milwright/chatui-helper", "RaulGuo1/ttt1", "milwright/test-bot", "subhrajit-mohanty/rag_api", "bsayon/Fitness-AI-Bot", "Muhammad-Umer-Khan/PersonalAssistant", "Kesherat/blade-inspection-demo", "Sriramsr3/InsureRAG", "milwright/chat-adventure-games", "ohOmg/AI_MinuteMate", "CUNYGradCenter/AmigAI-Demo", "ysharma/gradio.chat.app-HFIPs", "Ahmud/Final_Assignment_Template", "ritzy88/MyNewChatApp", "midnitefirefly93/MyNewChatApp", "keithpng/MyNewChatApp", "geraldjan001/MyNewChatApp", "JLYK/Sustainability", "myonis/openai-gpt-oss-120b", "abidlabs/openai-gpt-oss-120b-test", "akhaliq/openai-gpt-oss-120b", "unsafezero/openai-gpt-oss-120b", "KhushParikh/openai-gpt-oss-120b", "ebonivon/Openai-Gpt-oss-120", "Amed2121/openai-gpt-oss-120b", "lukedaduke/openai-gpt-oss-120b", "aeros0ul/openai-gpt-oss-120b", "Muyumba/openai-gpt-oss-120b", "TintinWu2025/openai-gpt-oss-120b-test", "laloadrianmorales/openai-oss-groq", "4face/openai-gpt-oss-120b", "KhangNVT/openai-gpt-oss-120b", "quanvhm/openai-gpt-oss-120b", "Jeslkc/JesChatbot", "fireworks-ai/model-inspector", "Devnik21/openai-gpt-oss-120b", "Liable/openai-gpt-oss-120b", "TheFinancialFox/openai-gpt-oss-120b", "Berinwall69/openai-gpt-oss-120b", "yeeaeee/openai-gpt-oss-120b", "Artemikk2/openai-gpt-oss-120b", "Greff3/openai-gpt-oss-120b", "umairwali6/openai-gpt-oss-120b", "awacke1/GPT-OSS-GPT4o-Multimodal-Gradio-FTW", "SiddhJagani/Jwero-120b", "rhergav/openai-gpt-oss-120b", "roshiai/openai-gpt-oss-120b", "VIDraft/gpt-oss-RAG", "saradwd/openai-gpt-oss-120b", "danypropsy/openai-gpt-oss-120b", "TrixProd/trix-oss-space", "vnanhtuan/openai-gpt-oss-120b", "ginigen/gpt-oss-RAG", "ReallyFloppyPenguin/openai-gpt-oss-120b", "teddy600/openai-gpt-oss-120b", "shalyhinpavel/mycelium", "AiCoderv2/openai-gpt-oss-120b", "Ebelgau/openai-gpt-oss-120b", "groccylu/openai-gpt-oss-120b", "reza1001427/openai-gpt-oss-120b", "Drwallacebreen/openai-gpt-oss-120b", "ramybenaroya/openai-gpt-oss-120b", "Danielser/openai-gpt-oss-120b", "AleaiactaEst1/openai-gpt-oss-120b", "JIMMYFACE/openai-gpt-oss-120b", "eueueueueueu/openai-gpt-oss-120b", "ginigen/FLUXllama", "Crow34/openai-gpt-oss-120b", "furkan314/openai-gpt-oss-120b", "AiCoderv2/ChatGpt", "Farhanlaatif/openai-gpt-oss-120b", "anshugoyal/Audit_Impact", "AlexArapoglu/openai-gpt-oss-120b", "keno1412/openai-gpt-oss-120b", "codedevjk/openai-gpt-oss-120b", "VNS12/Task1_FormulateYourQuestion", "karmaittech/karma_openai_gpt_120b", "VNS12/Task2_ResearchPlanAssistant", "Tj/openai-gpt-oss-120b", "Hammadm27/openai-gpt-oss-120b", "Hammadm27/openai-gpt", "yashlok/openai-gpt-oss-120b", "khizarjamshaidiqbal/openai-gpt-oss-120b", "anonymousuit51/openai-gpt-oss-120b", "Vaibhav09mbm/openai-gpt-oss-120b", "zxper/openai-gpt", "mnadell/41134114Brainstormer", "AlexusI/doctor", "ss8327685/openai-gpt-oss-120b", "mnadell/41134114_Translation", "samsungood/openai-gpt-oss-120b", "mnadell/41134114_counter_sub_arguments", "Serg4451D/gpt-oss-multimodal", "ebonivon/Openai-gpt-oss-120b", "Nova90/openai-gpt-oss-120b", "anonyuit52/openai-gpt-oss-120b", 
"mountofolives/openai-gpt-oss-120b", "Wazzer221/openai-gpt-oss-120b", "paiut/openai-gpt-oss-120b", "linkedcrawler/openai-gpt-oss-120b", "wwjph2018/openai-gpt-oss-120b", "Rifadul/openai-gpt-oss-120b", "Ebrahimalnono/openai-gpt-oss-120b", "hzz03/lyna_backend", "yinliangc/openai-gpt-oss-120b", "Him40706/openai-gpt-oss-120b", "lsniko/openai-gpt-oss-120b", "yinliangc/openai-gpt-oss-120b_2", "rtjkgr/openai-gpt-oss-120b", "lakkiroy/git-chat", "rtjkgr/m", "VNS12/Task3_ResearchAnalyses", "noeljiwanmall/career_conversation", "BaoKhuong/openai-gpt-oss-120b", "AKV24/GPT", "Chrishyun/OGPT", "Subnaut5482/openai-gpt-oss-120b", "aradfarmani131/first-ai-demo", "jdzjdz/openai-gpt-oss-120b", "namberino/mcq-gen-gpt", "Alhdrawi/openai-gpt-oss-120b", "namberino/mcq-gen-docker", "fdaudens/gpt-oss-agent-cookbook", "Ben000/openai-gpt-oss-120b", "lijan/openai-gpt-oss-120b", "titechking/titech", "Sakamoto-07/openai-gpt-oss-120b", "rajinikanthvadla1/openai-gpt-oss-120b", "rohans1801/SR_NS_Chatbot", "bilalhf/Customer_support_chatbot", "Peppemoio/openai-gpt-oss-120b", "leeroy-jankins/Poppy", "monvil/openai-gpt-oss-120b", "mnadell/3180grammar_and_spellchecker", "Viv528/openai-gpt-oss-120b", "YoAkatsuki/server", "asd23e/openai-gpt-oss-120b", "WebEssentz/openai-gpt-oss-120b", "tradeunifox/openai-gpt-oss-120b", "Reinecker/openai-gpt-oss-120b", "yunfanuy/openai-gpt-oss-120b", "nexple/openai-gpt-oss-120b", "hoangkha1810/gpt-oss-RAG-CyberSoft", "stranzersweb/youtube-financial-digest", "MrInfinexus/TDS-Project-2-Data-Analyst", "Momobako3/openai-gpt-oss-120b", "renpley2/pppposnmd", "yash-ahir/chatbot", "anweshabbose/Udemy_Search_Engine", "prosky2017/openai-gpt-oss-120b", "Park-Hip-02/Legal_RAG_Chatbot", "manhtran01/Chatbot_with_Tools", "Aradfarmaniii/openai-gpt-oss-120b", "Denisijcu/openai-gpt-oss-120b", "nwhamed/space_1", "ritzy88/pm-ai-assistant", "RickyTTT/NewsSpace", "dionyysos99/ada-ai-unified", "23f3004315/data-analyst-agent", "kangwifi/openai-gpt-oss-120b", "franclei0796/openai-gpt-oss-120b", "stegano/openai-gpt-oss-120b", "avinash445/Final_Assignment_Avinash", "baratwaj/openai-gpt-oss-120b", "jatainkumar/ankur_the_agribot", "Muhammad-Umer-Khan/BrightSolutionProfileBot", "VenuGopal8115/gpt-oss-120b-chatbot", "wuhuizgptamd/ai", "AbrahamKlb/youtube-rag-project", "Ninjasharp/ai-mac-app", "MMOON/IFSACTIONPLAN", "mithun1512/openai-gpt-oss-120b", "Barzi73/BarziBoot", "AbhayVG/VayuChat2", "SustainabilityLabIITGN/VayuChatv2", "Ccaca12/gpt-oss-120b-chatbot", "bharathmunakala/exp", "ElJoker63/TITAN", "photis/openai-gpt-oss-120b", "Barzi73/CEO", "DataMine/Maths-Olymps", "Habibahmadgillani/openai-gpt-oss-120b", "Lonewolf-003/openai-gpt-oss-120b", "Harshit2804/GenAI-Chatbot", "ahsancloud/openai-gpt-oss-120b", "Santhosh1511/openai-gpt-oss-120b", "JawedRoomi/BrightSolutionAssistant", "daksh1010/agribot", "MindCraft24729/openai-gpt-oss-120b", "taha454/AidMateLLM", "mnadell/Career_Exploration_for_English_Majors", "yashgori20/ThinklySEO", "anshugoyal/audit_query_to_audit_obs", "CodeHubb/openai-gpt-oss-120b", "siem-mule/openai-gpt-oss-120b", "madhu0810/pdf_reader", "TakiTakiTa/Chatbot", "KanTakahiro/utakata-radio-translate", "umint/openai-gpt-oss-120b", "prthm11/Database_Agent", "huzhou571/openai-gpt-oss-120b", "TakiTakiTa/af32fqfd", "apurv7777/ChatWithMe", "shineshaw/openai-gpt-oss-120b", "Grinding/AudioSummarizer", "TIm1124/Chat_v2_GPT", "TIm1124/RAG_Tokyo_v1-gpt_oss_120b", "Luv88/openai-gpt-oss-120b", "mgbam/yeye", "ryding/HistoPath", "jackyang1021125/openai-gpt-oss-120b", "rishuu300/Multi-Agent-Assistant", 
"kushjohri1/openai-gpt-oss-120b", "nako-owner/knitting-gauge-calculator", "Rooms/14_HF_Agent_project", "PhaseDOutAI/Persilia-AI", "pangxiang/openai-gpt-oss-120b", "umint/gpt-4.1-nano", "umint/o3", "yogies/chat-guide", "stackway-ai/openwebui", "Felguk/gpt-oss-120b", "Kushal-IIT-KGP/Ankur_AgriBot", "umint/openwebui", "Kamalkshs82/openai-gpt-oss-120b", "rishi-kesh-00/luma" ]
[ "apache-2.0" ]
null
null
120,412,337,472
null
[ "text-generation" ]
null
[ "GptOssForCausalLM", "AutoModelForCausalLM", "gpt_oss" ]
[ "text" ]
[ "text" ]
[ "text" ]
enterprise
company
[ "United States of America" ]
null
null
null
null
null
null
null
null
null