Why INT4 and INT8?
Curious about the choice of these quantizations. When upsampled to 8192 x 4096, or even half of that, fine details get all splotchy and ugly. My guess is that the lack of range compared to fp8 is what makes these images unusable.
Luckily QwenImage is capable enough to punch it up to a usable resolution without the LoRA, with some crop-and-stitch inpainting.
@SDuser23487 What quant version of Qwen Image are you using? INT8 is on par with GGUF Q8 in terms of precision and quality, and they are the highest-precision 8-bit quants available. GGUF Q8 and INT8 are also extremely close to BF16, and in some cases produce the same results (see the quick sketch below).
The LoRAs themselves were trained at the same precision as the base model or higher (bf16 & fp32); the 'int8' or 'int4' in the filename denotes the quantization of the base model they were trained on.
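To put the "extremely close to BF16" point in context, here's a minimal sketch of the symmetric per-channel INT8 round-trip that weight-only 8-bit quants typically use. The weights are toy Gaussian values, not the actual Qwen Image tensors:

```python
import torch

# Toy stand-in for a transformer weight matrix -- NOT actual Qwen Image
# weights, just Gaussian values for illustration.
w = torch.randn(4096, 4096)

# Symmetric per-channel INT8: one scale per output row, the usual
# weight-only 8-bit scheme.
scale = w.abs().amax(dim=1, keepdim=True) / 127
w_q = (w / scale).round().clamp(-127, 127)   # stored as 8-bit integers
w_dq = w_q * scale                           # dequantized for compute

rel_err = (w - w_dq).abs().mean() / w.abs().mean()
print(f"mean relative round-trip error: {rel_err:.2%}")  # ~1% for Gaussian-like weights
```

With one scale per output channel, the round-trip error stays around a percent for Gaussian-like weights, which is why INT8 and Q8 outputs track BF16 so closely in practice.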
Maybe I worded my initial comment poorly, so let me rephrase it. Sorry for the confusion.
Precision is not the issue. The issue (I believe) is that INT8 just doesn't have the dynamic numerical range needed to handle very fine details when going from 2048x1024 up to 8192x4096.
FP8 can represent both very large and very small values; INT8 can't. On top of that, INT is very sensitive to quantization noise.
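To make the range argument concrete, here is a minimal sketch (assuming PyTorch >= 2.1 for the float8_e4m3fn dtype; the numbers are made up for illustration, not taken from the model) comparing an FP8 round-trip against an INT8 round-trip with a single per-tensor scale:

```python
import torch

# Illustrative values spanning a wide dynamic range -- not real activations.
x = torch.tensor([192.0, 1.0, 0.05, 0.01])

# FP8 (e4m3) is floating point: roughly +-448 down to subnormals near 2**-9,
# with step size shrinking toward zero, so tiny values keep relative precision.
fp8 = x.to(torch.float8_e4m3fn).float()

# INT8 with one per-tensor scale is uniform: 256 evenly spaced levels,
# so anything far below the scale collapses toward zero.
scale = x.abs().max() / 127
int8 = (x / scale).round().clamp(-127, 127) * scale

print(fp8)   # tensor([192.0000, 1.0000, 0.0508, 0.0098]) -- small values survive
print(int8)  # tensor([192.0000, 1.5118, 0.0000, 0.0000]) -- small values vanish
```

Whether this is actually what causes the splotchiness at 8192x4096 I can't prove, but it shows why a uniform INT8 grid loses the small values that FP8 still resolves.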
I also want to clarify that this has nothing to do with the grid artifacts, which, by the way, are fixed by choosing the right LoRA, as your instructions suggest.
Anyway, thank you very much for these great LoRAs. The edge performance and adherence to a 4-sided equirectangular 360° layout are outstanding.
