Semantic Framework for Query Synthesised 3D Scene Rendering

Authors

Sri Gayathri Devi I, Sowmiya Sree S, Jerrick Gerald and Geetha Palanisamy, Anna University, India

Abstract

View synthesis generates new views of a scene from one or more input images. Most current methods rely on multiple input images, which is impractical for many applications, while creating a 3D scene from a single image is difficult because it requires a solid understanding of the 3D environment. To address this, complete scene understanding of a single-view image is performed using spatial feature extraction and depth-map prediction. This work proposes a novel end-to-end model trained on real images without any ground-truth 3D supervision. The learned 3D features are exploited to render the 3D view; on querying, the target view is generated by the Query network. A refinement network then decodes the projected features to in-paint missing regions and produce a realistic output image. The model was trained on two datasets, RealEstate10K and KITTI, covering an indoor and an outdoor scene respectively.
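The pipeline the abstract describes can be sketched in three stages. The following is a minimal toy illustration, not the authors' implementation: all function names, shapes, and the constant depth map are illustrative stand-ins for the learned networks (the real model uses trained feature extraction, a depth predictor, a Query network, and a refinement network).

```python
import numpy as np

# Hypothetical sketch of the single-image view-synthesis pipeline: extract
# spatial features and a depth map from one input image, project the 3D
# features toward a queried target pose, then in-paint missing regions.

def extract_features_and_depth(image):
    """Stand-in for the learned spatial feature extractor and depth predictor."""
    h, w, _ = image.shape
    features = image.mean(axis=2, keepdims=True)  # toy per-pixel feature
    depth = np.full((h, w), 1.0)                  # toy constant depth map
    return features, depth

def query_view(features, depth, dx):
    """Toy 'query network': shift features horizontally by a disparity
    inversely proportional to depth, leaving holes where nothing maps."""
    h, w, _ = features.shape
    out = np.zeros_like(features)
    mask = np.zeros((h, w), dtype=bool)           # True where a source pixel landed
    shift = np.round(dx / depth).astype(int)
    for y in range(h):
        for x in range(w):
            tx = x + shift[y, x]
            if 0 <= tx < w:
                out[y, tx] = features[y, x]
                mask[y, tx] = True
    return out, mask

def refine(projected, mask):
    """Toy refinement step: in-paint unseen pixels with the mean of visible ones."""
    fill = projected[mask].mean() if mask.any() else 0.0
    out = projected.copy()
    out[~mask] = fill
    return out

image = np.random.rand(8, 8, 3)
feats, depth = extract_features_and_depth(image)
projected, mask = query_view(feats, depth, dx=2.0)
novel_view = refine(projected, mask)
print(novel_view.shape)
```

In the actual model, the hand-written shift and mean-fill above are replaced by learned modules trained end-to-end on RealEstate10K and KITTI without ground-truth 3D supervision.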

Keywords

3D Scene Rendering, Differentiable Renderer, Scene Understanding, Quantized Variational AutoEncoder

Full Text  Volume 13, Number 13