Hey everyone, I need help with sageattention and flash-attn. Both packages install successfully (`pip show` output below), but when I run my script the lightx2v logs still report the v3 kernels as not found, and then the process crashes with an access violation. What is going wrong?
(env)(base) D:\workSpace\pythonProject\hunyuan1.5_4step
pip show sageattention
Name: sageattention
Version: 2.2.0+cu126torch2.6.0.post3
Summary: Accurate and efficient plug-and-play low-bit attention.
Home-page: https://github.com/thu-ml/SageAttention
Author: SageAttention team
Author-email:
License: Apache 2.0 License
Location: d:\workspace\pythonproject\hunyuan1.5_4step\env\lib\site-packages
Requires:
Required-by:
pip show flash-attn
Name: flash_attn
Version: 2.7.4
Summary: Flash Attention: Fast and Memory-Efficient Exact Attention
Home-page: https://github.com/Dao-AILab/flash-attention
Author: Tri Dao
Author-email: tri@tridao.me
License:
Location: d:\workspace\pythonproject\hunyuan1.5_4step\env\lib\site-packages
Requires: einops, torch
Required-by:
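Since `pip show` lists both packages, a quick import check can tell whether the installed wheels actually load in this environment (on Windows, a wheel built against a mismatched CUDA/torch ABI can be installed yet still fail at import time with a DLL error). A minimal sketch; the module names are just the ones pip reports above:

```python
import importlib

def check_modules(names):
    """Try importing each module; map name -> version string or error text."""
    results = {}
    for name in names:
        try:
            mod = importlib.import_module(name)
            results[name] = getattr(mod, "__version__", "unknown")
        except Exception as exc:  # ImportError, or a DLL load failure on Windows
            results[name] = f"FAILED: {exc}"
    return results

if __name__ == "__main__":
    for name, status in check_modules(["sageattention", "flash_attn"]).items():
        print(f"{name}: {status}")
```

If either entry prints `FAILED`, the wheel is not importable here at all, which would explain the "not found" log lines despite the successful install.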
D:\workSpace\pythonProject\hunyuan1.5_4step\env\Scripts\python.exe D:\workSpace\pythonProject\hunyuan1.5_4step\run.py
2025-11-27 22:12:55.089 | INFO | lightx2v.models.networks.hunyuan_video.infer.attn_no_pad::11 - flash_attn_varlen_func_v3 not available
2025-11-27 22:12:55.113 | INFO | lightx2v.models.networks.hunyuan_video.infer.attn_no_pad::29 - sageattn3 not found, please install sageattention first
2025-11-27 22:12:55.832 | INFO | lightx2v.common.ops.attn.flash_attn::15 - flash_attn_varlen_func_v3 not found, please install flash_attn3 first
2025-11-27 22:12:55.833 | INFO | lightx2v.common.ops.attn.flash_attn::21 - torch_mlu_ops not found.
2025-11-27 22:12:55.842 | INFO | lightx2v.common.ops.attn.sage_attn::26 - sageattn3 not found, please install sageattention first
2025-11-27 22:12:55.842 | INFO | lightx2v.common.ops.attn.sage_attn::33 - torch_mlu_ops not found.
2025-11-27 22:12:56.260 | WARNING | lightx2v.utils.quant_utils::7 - qtorch not found, please install qtorch.Please install qtorch (pip install qtorch).
2025-11-27 22:12:56.410 | INFO | lightx2v.utils.set_config:print_config:115 - config:
{
"do_mm_calib": false,
"cpu_offload": true,
"max_area": false,
"vae_stride": [
4,
16,
16
],
"patch_size": [
1,
2,
2
],
"feature_caching": "NoCaching",
"teacache_thresh": 0.26,
"use_ret_steps": false,
"use_bfloat16": true,
"lora_configs": null,
"use_prompt_enhancer": false,
"parallel": false,
"seq_parallel": false,
"cfg_parallel": false,
"enable_cfg": false,
"use_image_encoder": true,
"task": "t2v",
"model_path": "./models",
"model_cls": "hunyuan_video_1.5",
"sf_model_path": null,
"dit_original_ckpt": "./models/hy1.5_t2v_480p_lightx2v_4step.safetensors",
"low_noise_original_ckpt": null,
"high_noise_original_ckpt": null,
"transformer_model_name": "480p_t2v",
"num_channels_latents": 32,
"offload_granularity": "block",
"vae_offload": false,
"qwen25vl_cpu_offload": true,
"siglip_cpu_offload": false,
"byt5_cpu_offload": false,
"infer_steps": 4,
"target_width": 832,
"target_height": 480,
"target_video_length": 81,
"sample_guide_scale": 1,
"sample_shift": 9.0,
"fps": 16,
"aspect_ratio": "16:9",
"boundary": 0.9,
"boundary_step_index": 2,
"denoising_step_list": [
1000,
750,
500,
250
],
"attn_type": "sage_attn2",
"transformer_model_path": "./models\transformer\480p_t2v"
}
2025-11-27 22:12:56.411 | INFO | lightx2v.models.runners.default_runner:init_modules:38 - Initializing runner modules...
2025-11-27 22:12:56.552 | INFO | lightx2v.utils.custom_compiler:_discover_compiled_methods:120 - [Compile] Discovering compiled methods for HunyuanVideo15Model...
2025-11-27 22:12:56.552 | INFO | lightx2v.models.networks.hunyuan_video.model:_load_ckpt:170 - Loading weights from ./models/hy1.5_t2v_480p_lightx2v_4step.safetensors
Process finished with exit code -1073741819 (0xC0000005)
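For reference, the negative exit code above is just the Windows NTSTATUS value printed as a signed 32-bit integer. Decoding it confirms it is 0xC0000005, STATUS_ACCESS_VIOLATION, i.e. a crash inside native code rather than a Python exception:

```python
# Interpret a negative Windows exit code as its unsigned 32-bit NTSTATUS value.
def to_ntstatus(code: int) -> str:
    return hex(code & 0xFFFFFFFF)

print(to_ntstatus(-1073741819))  # -> 0xc0000005 (STATUS_ACCESS_VIOLATION)
```

So the failure happens in a compiled extension (CUDA kernel or DLL) right after the checkpoint starts loading, not in Python itself.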