In this article, the authors present SMERF, a view-synthesis approach for real-time rendering of large scenes at high fidelity. They highlight the tension between explicit scene representations, which render quickly, and neural fields, which surpass them in quality but remain too expensive for real-time applications. SMERF resolves this tension with a hierarchical model-partitioning scheme and a distillation training strategy. The resulting method supports full six-degrees-of-freedom navigation in a web browser and renders in real time on commodity smartphones and laptops. The authors provide interactive demos and show that SMERF outperforms existing models.
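To make the two ideas above concrete, here is a minimal, hypothetical sketch of tile-based partitioning combined with teacher-to-student distillation. The toy MLPs, the `tile_of` helper, and all hyperparameters are illustrative assumptions, not SMERF's actual architecture or training code; the point is only that each compact submodel is supervised by a large frozen teacher rather than by raw photographs.

```python
import torch
import torch.nn as nn


def make_mlp(in_dim: int, out_dim: int, hidden: int) -> nn.Sequential:
    """Tiny MLP standing in for a radiance-field network (illustrative only)."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


# A large, frozen "teacher" covering the whole scene (stand-in for an offline
# high-quality model). Input here is a 6-D ray (origin + direction), output RGB.
teacher = make_mlp(6, 3, hidden=256).eval()

# Coarse hierarchical partitioning: one compact "student" submodel per spatial
# tile; at render time only the submodel owning the camera's tile is active.
tile_size = 2.0
students = {(i, j): make_mlp(6, 3, hidden=64) for i in range(2) for j in range(2)}


def tile_of(position: torch.Tensor) -> tuple:
    """Map a camera position (x, y, z) to the (i, j) tile that owns it."""
    return tuple((position[:2] / tile_size).floor().long().tolist())


# Distillation: each submodel learns to match the teacher's ray colors, so
# every student inherits the teacher's appearance within its own tile.
for tile, student in students.items():
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    for step in range(100):
        rays = torch.randn(1024, 6)          # toy batch of rays "in this tile"
        with torch.no_grad():
            target_rgb = teacher(rays)        # teacher supplies the supervision
        loss = nn.functional.mse_loss(student(rays), target_rgb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Render-time sketch: pick the submodel for the current camera position.
active_student = students[tile_of(torch.tensor([1.0, 3.0, 0.5]))]
```

In this toy setup the teacher is queried per ray; the actual paper distills into baked, tiled representations suited to browser rendering, but the supervision pattern is the same.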
https://smerf-3d.github.io/