Wan 2.2 Animate Model Download and Setup (ComfyUI)

To use this workflow in ComfyUI, download the models listed below and place them in the specified folders. Make sure folder names and file names match exactly as shown to prevent load errors.

Main Diffusion Model (GGUF)
Model: Wan2.2-Animate-14B-GGUF
Download: https://huggingface.co/QuantStack/Wan2.2-Animate-14B-GGUF

Put it here: ComfyUI/models/diffusion_models/

Note: This model is quantized in GGUF format. Choose the version that fits your GPU VRAM:

Q4_K_M → about 10–12 GB VRAM (balanced)
Q5_K_S → about 14–16 GB VRAM (recommended for mid-range GPUs)
Q6_K → about 20 GB or more VRAM (highest quality)
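As a rough guide, you can pick the quant level from your available VRAM and fetch it programmatically. This is a minimal sketch assuming the huggingface_hub package is installed; the filename pattern Wan2.2-Animate-14B-<quant>.gguf is an assumption, so check the repository's file list and adjust it if the actual names differ.

```python
# Minimal sketch: pick a GGUF quant level by VRAM and download it.
# Assumes huggingface_hub is installed (pip install huggingface_hub) and that
# the files follow the "Wan2.2-Animate-14B-<quant>.gguf" naming (verify in the repo).
from huggingface_hub import hf_hub_download

# Rough VRAM guidance from the list above, in GB.
QUANT_BY_VRAM = [
    (20, "Q6_K"),    # 20 GB or more: highest quality
    (14, "Q5_K_S"),  # 14-16 GB: recommended for mid-range GPUs
    (10, "Q4_K_M"),  # 10-12 GB: balanced
]

def pick_quant(vram_gb: float) -> str:
    """Return the largest listed quant level that fits the given VRAM."""
    for min_vram, quant in QUANT_BY_VRAM:
        if vram_gb >= min_vram:
            return quant
    return "Q4_K_M"  # smallest listed option as a fallback

quant = pick_quant(16)  # e.g. a 16 GB card -> Q5_K_S
hf_hub_download(
    repo_id="QuantStack/Wan2.2-Animate-14B-GGUF",
    filename=f"Wan2.2-Animate-14B-{quant}.gguf",  # assumed naming, adjust if needed
    local_dir="ComfyUI/models/diffusion_models",
)
```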

LoRAs

lightx2v I2V (animation motion LoRA)
Download: https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors

Put it here: ComfyUI/models/loras/

WanAnimate relight LoRA (lighting and realism enhancer)
Download: https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/LoRAs/Wan22_relight/WanAnimate_relight_lora_fp16.safetensors

Put it here: ComfyUI/models/loras/

Text Encoder
Model: umt5_xxl_fp8_e4m3fn_scaled.safetensors
Download: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors

Put it here: ComfyUI/models/text_encoders/

CLIP Vision Encoder
Model: clip_vision_h.safetensors
Download: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors

Put it here: ComfyUI/models/clip_vision/

VAE
Model: wan_2.1_vae.safetensors
Download: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors

Put it here: ComfyUI/models/vae/
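All of the fixed-name files above (both LoRAs, the text encoder, the CLIP vision encoder, and the VAE) can also be fetched programmatically. This is a minimal sketch assuming the huggingface_hub package is installed; the repo IDs and file paths are copied from the download links above, and the target folders match the "Put it here" locations.

```python
# Minimal sketch: download the fixed-name files listed above with huggingface_hub.
# Assumes the package is installed (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

FILES = [
    # (repo_id, path inside the repo, ComfyUI target folder)
    ("Kijai/WanVideo_comfy",
     "Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors",
     "ComfyUI/models/loras"),
    ("Kijai/WanVideo_comfy",
     "LoRAs/Wan22_relight/WanAnimate_relight_lora_fp16.safetensors",
     "ComfyUI/models/loras"),
    ("Comfy-Org/Wan_2.1_ComfyUI_repackaged",
     "split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors",
     "ComfyUI/models/text_encoders"),
    ("Comfy-Org/Wan_2.1_ComfyUI_repackaged",
     "split_files/clip_vision/clip_vision_h.safetensors",
     "ComfyUI/models/clip_vision"),
    ("Comfy-Org/Wan_2.2_ComfyUI_Repackaged",
     "split_files/vae/wan_2.1_vae.safetensors",
     "ComfyUI/models/vae"),
]

for repo_id, filename, target in FILES:
    # hf_hub_download keeps the repo's subfolder structure under local_dir,
    # so move/flatten the file afterwards if your loader expects it at the top level.
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target)
    print("downloaded:", path)
```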

Required Custom Nodes

Install these custom nodes either through ComfyUI Manager or by cloning them manually into the folder ComfyUI/custom_nodes/ (a cloning sketch follows the list below).

comfyui_controlnet_aux https://github.com/Fannovel16/comfyui_controlnet_aux

ComfyUI-KJNodes https://github.com/kijai/ComfyUI-KJNodes

ComfyUI-segment-anything-2 https://github.com/kijai/ComfyUI-segment-anything-2

IAMCCS-nodes https://github.com/IAMCCS/IAMCCS-nodes

ComfyUI-VideoHelperSuite https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
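If you prefer cloning manually, a minimal sketch along the lines below works; it assumes git is available on your PATH and uses the repository folder names taken from the URLs above.

```python
# Minimal sketch: clone the custom node repositories listed above.
# Assumes git is on PATH; skip this if you install through ComfyUI Manager.
import subprocess
from pathlib import Path

CUSTOM_NODES_DIR = Path("ComfyUI/custom_nodes")
REPOS = [
    "https://github.com/Fannovel16/comfyui_controlnet_aux",
    "https://github.com/kijai/ComfyUI-KJNodes",
    "https://github.com/kijai/ComfyUI-segment-anything-2",
    "https://github.com/IAMCCS/IAMCCS-nodes",
    "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",
]

for url in REPOS:
    target = CUSTOM_NODES_DIR / url.rstrip("/").split("/")[-1]
    if target.exists():
        print("already present, skipping:", target)
        continue
    subprocess.run(["git", "clone", url, str(target)], check=True)
```

After cloning, install each node pack's Python dependencies if it ships a requirements.txt, then restart ComfyUI so the new nodes are registered.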

Quick Start

Load this workflow in ComfyUI.
Upload your reference image and input video.
Adjust the positive and negative prompts.
Make sure the green points and red points are set up properly in the detection subgraph.
Make sure the width and height values are multiples of 16 (a small helper sketch follows the Conclusion).
Run the workflow; your final animation will be saved automatically.

Conclusion

This workflow uses the Wan 2.2 Animate 14B model in GGUF format to bring realistic motion generation into ComfyUI. Match the model quantization level to your GPU memory, install the required custom nodes, and the workflow will run smoothly.
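For the multiple-of-16 requirement in the Quick Start, a tiny helper can snap arbitrary dimensions into place. This is an illustrative sketch; the function name and the choice to round down are not part of the workflow itself.

```python
# Minimal helper for the multiple-of-16 requirement from the Quick Start.
# Rounding down (rather than up) is an illustrative choice; any multiple of 16
# that roughly preserves the aspect ratio will do.
def snap_to_multiple_of_16(value: int) -> int:
    """Round a dimension down to the nearest multiple of 16 (minimum 16)."""
    return max(16, (value // 16) * 16)

width, height = snap_to_multiple_of_16(833), snap_to_multiple_of_16(480)
print(width, height)  # 832 480
```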
