SVDQuant: 4-Bit Quantization Powers 12B Flux on a 16GB 4090 GPU with 3x Speedup

SVDQuant is a post-training quantization technique for diffusion models that quantizes both weights and activations to 4 bits while maintaining high visual fidelity. Its key idea is a low-rank branch that absorbs outliers, leaving a residual that is far easier to quantize. Paired with the Nunchaku inference engine, which fuses the low-rank branch into the 4-bit path to avoid redundant memory access, SVDQuant runs the 12B FLUX.1 model on a 16GB laptop 4090 GPU with roughly a 3x speedup. The authors report that it surpasses other 4-bit baselines in visual quality and text alignment, and that it integrates with LoRA branches without extra memory-access overhead.
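To make the low-rank idea concrete, here is a minimal NumPy sketch of the general decomposition principle (not the official SVDQuant implementation, whose smoothing and kernel details differ): split a weight matrix into a low-rank branch, kept in higher precision, plus a residual that is quantized to 4 bits. The function names and the rank value are illustrative assumptions.

```python
import numpy as np

def fake_quant_int4(x):
    """Simulated symmetric 4-bit quantization (integer levels in [-8, 7])."""
    max_abs = np.abs(x).max()
    scale = max_abs / 7.0 if max_abs > 0 else 1.0
    return np.clip(np.round(x / scale), -8, 7) * scale

def low_rank_plus_int4(W, rank=16):
    """Illustrative decomposition: W ~= L @ R + quantized residual."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    L = U[:, :rank] * S[:rank]   # low-rank branch absorbs dominant components
    R = Vt[:rank]
    residual = W - L @ R         # smaller dynamic range -> easier to quantize
    return L, R, fake_quant_int4(residual)

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
W[0, 0] = 50.0                   # inject an outlier that would wreck naive int4

L, R, Rq = low_rank_plus_int4(W)
err_naive = np.abs(W - fake_quant_int4(W)).mean()
err_lowrank = np.abs(W - (L @ R + Rq)).mean()
print(err_naive, err_lowrank)    # low-rank + residual path has lower error
```

Because the outlier inflates the quantization scale of the whole matrix, naive 4-bit quantization loses most of the signal; routing the dominant components through the low-rank branch shrinks the residual's dynamic range and cuts the reconstruction error.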

https://hanlab.mit.edu/blog/svdquant