NPBG++: Accelerating Neural Point-Based Graphics

CVPR 2022

Ruslan Rakhimov1*, Andrei-Timotei Ardelean1*, Victor Lempitsky1,2, Evgeny Burnaev1,3

1Skolkovo Institute of Science and Technology 2Yandex 3Artificial Intelligence Research Institute
* denotes equal contribution


We present a new system (NPBG++) for the novel view synthesis (NVS) task that achieves high rendering realism with low scene fitting time. Our method efficiently leverages the multiview observations and the point cloud of a static scene to predict a neural descriptor for each point, improving upon the pipeline of Neural Point-Based Graphics in several important ways. By predicting the descriptors with a single pass through the source images, we lift the requirement of per-scene optimization while also making the neural descriptors view-dependent and more suitable for scenes with strong non-Lambertian effects. In our comparisons, the proposed system outperforms previous NVS approaches in terms of fitting and rendering runtimes while producing images of similar quality.

Pipeline overview

Overview of NPBG++

We represent the scene as a point cloud with a view-dependent neural descriptor attached to each point. During the modeling stage, we sequentially process each input view (input image alignment and feature extraction) and apply online aggregation to update the neural descriptor of each point, requiring no per-scene fitting. During the novel view synthesis stage, we rasterize the point cloud, pass the rasterization result through the rendering network, and post-process the output (output image alignment) to obtain the novel view.
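The online aggregation step can be illustrated with a minimal sketch: each source view contributes a feature vector for every point visible in it, and per-point descriptors are updated incrementally rather than optimized per scene. The running average used below is an assumption for illustration; the actual aggregation operator in NPBG++ may differ.

```python
import numpy as np

class OnlineDescriptorAggregator:
    """Incrementally aggregates per-view point features into descriptors."""

    def __init__(self, num_points, descriptor_dim):
        self.descriptors = np.zeros((num_points, descriptor_dim))
        self.counts = np.zeros(num_points)

    def update(self, visible_ids, view_features):
        # visible_ids: (K,) indices of points visible in the current view
        # view_features: (K, D) features sampled from that view's feature map
        c = self.counts[visible_ids]
        # Running mean over all views that have observed each point so far
        self.descriptors[visible_ids] = (
            self.descriptors[visible_ids] * c[:, None] + view_features
        ) / (c[:, None] + 1.0)
        self.counts[visible_ids] = c + 1.0

# Toy usage: three points, two views with partial visibility
agg = OnlineDescriptorAggregator(num_points=3, descriptor_dim=2)
agg.update(np.array([0, 1]), np.array([[1.0, 0.0], [0.0, 1.0]]))
agg.update(np.array([1, 2]), np.array([[2.0, 1.0], [3.0, 3.0]]))
print(agg.descriptors[1])  # mean of point 1's two observations -> [1. 1.]
```

Because the update touches only the points visible in the current view, the scene representation is built in a single pass over the source images, which is what removes the fitting stage of the original NPBG pipeline.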


Modeling view-dependent effects

Overview video (5 min)


Citation

@InProceedings{Rakhimov_2022_CVPR,
    author={Rakhimov, Ruslan and Ardelean, Andrei-Timotei and Lempitsky, Victor and Burnaev, Evgeny},
    title={NPBG++: Accelerating Neural Point-Based Graphics},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2022}
}