
Implicit Neural Representations: From Objects to 3D Scenes

16.4K views
Jun 15, 2020
26:13

Keynote presented on June 19, 2020 at CVPR in the 2nd ScanNet Indoor Scene Understanding Challenge.

Slides: http://www.cvlibs.net/talks/talk_cvpr_2020_implicit_scenes.pdf

Papers:
https://arxiv.org/abs/2003.04618
https://arxiv.org/abs/2003.12406
http://www.cvlibs.net/publications/Schmitt2020CVPR.pdf

Abstract: Implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to the comparatively simple geometry of single objects. The key limiting factor of implicit methods is their simple fully-connected network architecture, which does not allow for integrating local information from the observations or for incorporating inductive biases such as translational equivariance. In this talk, I will propose a hybrid model that uses both a neural implicit shape representation and 2D/3D convolutions for detailed reconstruction of objects and large-scale 3D scenes. I will further discuss a neural representation that captures the visual appearance of an object in terms of its surface light field, which allows for manipulating the light source and relighting the scene using environment maps. Finally, I will show some of our recent efforts towards collecting material information for real-world objects, which is required for training such models. I will also briefly present the KITTI-360 dataset, a new outdoor dataset with 360-degree sensor information and semantic annotations in 3D and 2D, which will be released this summer.
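The hybrid model described in the abstract conditions a coordinate-based implicit network on local features (e.g. sampled from a 2D/3D convolutional feature grid) rather than using a purely global fully-connected architecture. A minimal toy sketch of such a conditional occupancy network, assuming NumPy and illustrative layer sizes (none of the names or dimensions are from the talk):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class OccupancyMLP:
    """Toy conditional occupancy network: maps a 3D query point plus a
    local feature vector (as would be sampled from a convolutional
    feature grid) to an occupancy probability in [0, 1]."""

    def __init__(self, feat_dim=8, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 + feat_dim  # xyz coordinates concatenated with features
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, 1)) * 0.1
        self.b2 = np.zeros(1)

    def __call__(self, points, feats):
        # points: (N, 3), feats: (N, feat_dim) -> occupancies: (N,)
        x = np.concatenate([points, feats], axis=1)
        h = np.tanh(x @ self.w1 + self.b1)
        return sigmoid(h @ self.w2 + self.b2).squeeze(-1)

net = OccupancyMLP()
rng = np.random.default_rng(1)
pts = rng.standard_normal((4, 3))     # 4 query points in space
feats = rng.standard_normal((4, 8))   # their interpolated local features
occ = net(pts, feats)                 # per-point occupancy probabilities
```

Because the network is queried per point, the surface can be extracted at arbitrary resolution (e.g. via marching cubes over a dense grid of queries); the local conditioning features are what let such a model scale beyond single objects to larger scenes.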

