At Luma Labs, we are introducing Inductive Moment Matching (IMM), a new class of generative models that aims to break the stagnation in algorithmic innovation in generative pre-training. IMM surpasses diffusion models in sample quality while being more than ten times more efficient at sampling. By matching distributions with maximum mean discrepancy, IMM trains stably, scales efficiently, and achieves state-of-the-art few-step results on ImageNet 256×256 and CIFAR-10. Unlike consistency models, IMM remains stable and reliable across a wide range of hyperparameters and architectures. This inference-first perspective marks the beginning of a paradigm shift toward multi-modal foundation models that fully unlock creative intelligence. Join us on this mission.
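As a rough illustration of the moment-matching idea, and not the IMM training objective itself (which matches distributions induced along the interpolation between noise and data), here is a minimal sketch of a maximum mean discrepancy estimator with an RBF kernel; the bandwidth, sample sizes, and toy data below are assumptions chosen purely for illustration:

```python
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian (RBF) kernel on pairwise squared distances between rows of x and y.
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    # Biased (V-statistic) estimate of squared MMD between two sample sets:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    k_xx = rbf_kernel(x, x, bandwidth).mean()
    k_yy = rbf_kernel(y, y, bandwidth).mean()
    k_xy = rbf_kernel(x, y, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Toy usage: compare model samples against data samples (stand-in tensors, not real data).
data = torch.randn(256, 2)
samples = torch.randn(256, 2) * 1.5 + 0.5
print(mmd_squared(samples, data, bandwidth=1.0).item())
```

A distance of this kind is differentiable in the model samples, so it can serve as a training signal that pulls the sample distribution toward the data distribution without requiring likelihoods; see the post linked below for how IMM builds its actual objective.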
https://lumalabs.ai/news/inductive-moment-matching